
The Power of Linear Regression Models
In this blog, you are going to learn
- What are Linear Regression Models?
- How to implement an Ordinary Least Squares model?
- How to implement a Ridge Regression model?
- How to implement a Lasso Regression model?
- How to implement a Multi-Task Lasso model?
- How to implement an Elastic-Net model?
- How to implement a Multi-Task Elastic Net model?
- How to implement a Least Angle Regression model?
- How to implement a Lars Lasso model?
Linear Regression
Linear regression is a fundamental technique in machine learning and statistics, used for modeling the relationship between a dependent variable and one or more independent variables. By establishing this relationship, linear regression allows us to make predictions and gain insights into data patterns. As one of the most widely used and easily interpretable models, it serves as a solid foundation for understanding more complex machine learning algorithms.
The importance of linear regression in machine learning cannot be overstated. It is used in various applications, such as forecasting, resource allocation, and risk assessment, and it is a crucial tool in any data scientist’s toolkit. By mastering linear regression, you can build a strong foundation for more advanced machine learning techniques, such as logistic regression, support vector machines, and deep learning.
This comprehensive guide aims to provide a thorough understanding of linear regression in machine learning, including its key concepts, applications, and potential pitfalls. The blog post is structured as follows:
- Understanding Linear Regression: A deep dive into the basics of linear regression, its mathematical representation, and its various forms.
- Gradient Descent: An exploration of the optimization technique used to minimize the cost function in linear regression.
- Cost Function: A discussion of the metric used to measure the performance of the linear regression model.
- Regularization: An examination of the techniques used to prevent overfitting in linear regression models.
- Linear Regression in Python: A step-by-step guide to implementing linear regression using popular Python libraries.
- Practical Examples: Real-world applications and examples of linear regression in action.
- Advanced Topics: An introduction to more advanced techniques, such as polynomial regression and feature selection methods.
By the end of this guide, you will have a solid understanding of linear regression, its underlying principles, and how to apply it in real-world scenarios. So, let’s dive into the world of linear regression and demystify its concepts together.
Understanding Linear Regression
Linear regression is a supervised learning algorithm used to model the relationship between a dependent variable (also known as the target or response variable) and one or more independent variables (also known as predictors, features, or input variables). The primary objective of linear regression is to establish a linear relationship between these variables, enabling us to make predictions and analyze data patterns.
Definition of Linear Regression
Linear regression is a parametric method that assumes a linear relationship between the independent and dependent variables. By estimating the parameters, it can be used to make predictions or explain the variability in the target variable. There are two main types of linear regression: simple linear regression and multiple linear regression.
Mathematical Representation
The mathematical representation of linear regression is an equation that combines the input features with their corresponding weights (also called coefficients) and a bias term (also called the intercept). The general equation for a linear regression model is:
y = β0 + β1x1 + β2x2 + … + βnxn + ε
where:
- y is the dependent variable (target)
- β0 is the bias term (intercept)
- β1, β2, …, βn are the coefficients (weights) of the independent variables x1, x2, …, xn
- ε is the error term, representing the difference between the predicted and actual values
Simple and Multiple Linear Regression
Simple linear regression, also known as univariate linear regression, involves a single independent variable. The model’s objective is to find the best-fitting straight line that describes the relationship between the input feature and the target variable. The equation for simple linear regression is:
y = β0 + β1x1 + ε
In contrast, multiple linear regression involves two or more independent variables. The model attempts to find the best-fitting hyperplane that describes the relationship between the input features and the target variable. The equation for multiple linear regression is the general equation mentioned earlier.
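As a quick illustration, here is a minimal sketch on made-up toy numbers (not the dataset used later in this post) showing how β0 and β1 of a simple linear regression can be estimated with NumPy's least-squares solver:
import numpy as np

# Toy data: y is roughly 2 + 3*x plus a little noise (illustrative values only)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([5.1, 7.9, 11.2, 13.8, 17.1])

# Design matrix with a column of ones for the intercept β0
X_design = np.column_stack([np.ones_like(x), x])

# Solve for [β0, β1] by minimizing the sum of squared errors
beta, *_ = np.linalg.lstsq(X_design, y, rcond=None)
print("Intercept (β0):", beta[0])
print("Slope (β1):", beta[1])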
Assumptions Underlying Linear Regression
Linear regression relies on certain assumptions to provide valid and meaningful results. These assumptions are:
- Linearity: There is a linear relationship between the dependent variable and the independent variables. If this assumption is violated, the model’s predictions may be inaccurate.
- Independence: The observations in the dataset are independent of each other. This means that the outcome of one observation does not influence the outcome of another observation. This assumption is crucial for ensuring the model’s validity.
- Homoscedasticity: The variance of the error terms is constant across all levels of the independent variables. This assumption ensures that the model’s predictions are equally accurate for all values of the input features.
- Normality: The error terms (residuals) are normally distributed. This assumption allows us to make statistical inferences about the model’s parameters and apply hypothesis testing.
- No multicollinearity: In multiple linear regression, the independent variables should not be highly correlated with each other. High multicollinearity can lead to unreliable coefficient estimates and make it difficult to determine the individual contribution of each independent variable.
It is important to check and validate these assumptions before interpreting the results of a linear regression model. Violations of these assumptions may lead to biased or inaccurate estimates and predictions. Various diagnostic techniques, such as residual plots and statistical tests, can be used to assess the validity of these assumptions.
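For instance, here is a minimal, self-contained sketch (on synthetic data invented purely for illustration) of a residual plot, one of the diagnostic tools mentioned above. Residuals that scatter randomly around zero with a roughly constant spread are consistent with the linearity and homoscedasticity assumptions:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

# Synthetic data: y = 2 + 3x plus Gaussian noise (illustrative only)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100).reshape(-1, 1)
y = 2.0 + 3.0 * x.ravel() + rng.normal(0.0, 1.0, 100)

model = LinearRegression().fit(x, y)
residuals = y - model.predict(x)

# Plot residuals against predictions and look for patterns or changing spread
plt.scatter(model.predict(x), residuals)
plt.axhline(0, color="red", linewidth=1)
plt.xlabel("Predicted values")
plt.ylabel("Residuals")
plt.show()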
Cost Function
The cost function, also known as the loss function or objective function, is a measure of the performance of a machine learning model. In the context of linear regression, the cost function quantifies the difference between the predicted values and the actual values of the target variable. Minimizing the cost function is crucial to obtaining an accurate and well-fitted model.
Understanding the Cost Function
A well-chosen cost function captures the essence of the learning problem and guides the optimization process. In linear regression, the goal is to find the model parameters that minimize the discrepancies between the predicted and actual target values. The cost function should be designed to penalize large deviations from the true values while rewarding accurate predictions.
Common Cost Functions in Linear Regression
There are several cost functions used in linear regression, each with its own characteristics and properties. Some common cost functions include:
- Mean Squared Error (MSE): As mentioned earlier, the MSE calculates the average of the squared differences between the predicted and actual target values. It is a widely used cost function due to its simplicity and differentiability, which is essential for gradient-based optimization algorithms like gradient descent. MSE = (1/N) * Σ(y - (β0 + β1x1 + β2x2 + … + βnxn))^2
- Mean Absolute Error (MAE): The MAE calculates the average of the absolute differences between the predicted and actual target values. It is less sensitive to outliers than the MSE, as it does not involve squaring the errors. MAE = (1/N) * Σ|y - (β0 + β1x1 + β2x2 + … + βnxn)|
- Huber Loss: The Huber loss is a combination of the MSE and MAE. It behaves like the MSE for small errors and like the MAE for large errors. This hybrid property makes it robust to outliers while maintaining differentiability. Huber Loss = ΣLδ(y - (β0 + β1x1 + β2x2 + … + βnxn))
where Lδ is the Huber function, defined as:
Lδ(z) = 0.5 * z^2 if |z| <= δ
Lδ(z) = δ * (|z| - 0.5 * δ) if |z| > δ
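To make these definitions concrete, here is a minimal NumPy sketch (with made-up actual and predicted values) that computes the MSE, MAE, and Huber loss exactly as defined above:
import numpy as np

# Toy actual and predicted target values (illustrative only)
y_true = np.array([3.0, 5.0, 7.5, 10.0])
y_pred = np.array([2.8, 5.4, 7.0, 12.0])
errors = y_true - y_pred

mse = np.mean(errors ** 2)        # Mean Squared Error
mae = np.mean(np.abs(errors))     # Mean Absolute Error

# Huber loss: quadratic for small errors, linear for large ones
delta = 1.0
huber = np.where(np.abs(errors) <= delta,
                 0.5 * errors ** 2,
                 delta * (np.abs(errors) - 0.5 * delta))
print("MSE:", mse, "MAE:", mae, "Huber (sum):", huber.sum())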
Selecting the Right Cost Function
The choice of cost function depends on the problem and the desired properties of the model. In general, the MSE is a good choice for linear regression, as it is simple, differentiable, and works well in most cases. However, if the dataset contains significant outliers or the distribution of the target variable is heavily skewed, the MAE or Huber loss might be more appropriate.
Dataset
The first step is to import all the necessary libraries.
from sklearn import linear_model
from sklearn.model_selection import train_test_split
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.metrics import mean_squared_error, r2_score
Download Dataset
Attributes information
- CRIM: Per capita crime rate by town
- ZN: Proportion of residential land zoned for lots over 25,000 sq. ft
- INDUS: Proportion of non-retail business acres per town
- CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
- NOX: Nitric oxide concentration (parts per 10 million)
- RM: Average number of rooms per dwelling
- AGE: Proportion of owner-occupied units built prior to 1940
- DIS: Weighted distances to five Boston employment centers
- RAD: Index of accessibility to radial highways
- TAX: Full-value property tax rate per $10,000
- PTRATIO: Pupil-teacher ratio by town
- B: 1000(Bk − 0.63)², where Bk is the proportion of people of African American descent by town
- LSTAT: Percentage of the lower status of the population
- MEDV: Median value of owner-occupied homes in $1000s
column_names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
data_url = "https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data"
X = pd.read_csv(data_url, sep=r"\s+", names=column_names)
X.head(5)
y = X.pop('MEDV')
y
Once we have our training set and the target column, we will split the dataset into training and testing.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.10, random_state=42)
Linear Regression
The Linear Regression (Ordinary Least Squares) model minimizes the sum of squared differences between the observed target values and the values predicted by the linear model. It fits the training dataset, finds a coefficient for each variable, and uses those coefficients to predict the target variable.
Advantages
- Easy to understand and implement.
- Training is very fast.
Disadvantages
- It assumes that the features are independent of one another.
- Prone to errors when the features are collinear.
- Prone to overfitting.
Let us implement a Linear Regression model on the Boston housing dataset.
ordinary_least_square = linear_model.LinearRegression()
ordinary_least_square.fit(X_train, y_train)
Once the model has been trained on the dataset, let’s have a look at the coefficients for each variable.
ordinary_least_square.coef_
These are the values of the coefficients β1–βn.
predicted_values = ordinary_least_square.predict(X_test)
predicted_values
Let’s try to plot a graph between the target values and the predicted values.
plt.scatter(X_test['DIS'], y_test, color="black")
plt.plot(X_test['DIS'], predicted_values, color="blue", linewidth=3)
Looking at the graph, we can see that our model has learned from the dataset.
print("Mean squared error for Linear Regression : %.2f " % mean_squared_error(y_test, predicted_values)) print("Coefficient of determination for Linear Regression : %.2f Ordinary Least Square" % r2_score(y_test, predicted_values))
Regularization
Regularization is a technique used in machine learning to prevent overfitting, which occurs when a model learns the noise in the training data instead of the underlying patterns. Overfitting results in poor generalization performance on unseen data. In linear regression, regularization adds a penalty term to the cost function, which discourages the model from assigning excessively large weights to the input features.
Understanding Regularization
Regularization works by introducing a penalty term based on the model’s parameters (coefficients) to the cost function. The objective now becomes minimizing both the original cost function and the penalty term. This trade-off between fitting the data and maintaining simplicity helps prevent overfitting and ensures a more robust model.
Types of Regularization
There are two main types of regularization used in linear regression: L1 regularization (Lasso) and L2 regularization (Ridge).
- L1 Regularization (Lasso): Lasso regularization adds the absolute values of the model’s coefficients to the cost function. This promotes sparsity in the coefficients, effectively performing feature selection by driving some coefficients to zero. Lasso is particularly useful when there are a large number of features, and only a subset of them is believed to have a significant impact on the target variable. Cost Function with L1 Regularization = Original Cost Function + λ * Σ|βi|
where λ is the regularization parameter, which controls the strength of the penalty.
- L2 Regularization (Ridge): Ridge regularization adds the squared values of the model’s coefficients to the cost function. This discourages large coefficients but does not promote sparsity like Lasso. Ridge regularization is useful when there are correlated features, as it tends to distribute the weights evenly among them. Cost Function with L2 Regularization = Original Cost Function + λ * Σβi^2
Again, λ is the regularization parameter, which controls the strength of the penalty.
Choosing the Right Regularization
The choice between L1 and L2 regularization depends on the problem and the desired properties of the model. If feature selection is a priority and you believe that only a subset of features is relevant, Lasso regularization might be the better choice. If there are correlated features or you want to avoid sparse coefficients, Ridge regularization may be more appropriate.
In some cases, a combination of L1 and L2 regularization, called Elastic Net regularization, can provide the best of both worlds. The Elastic Net combines the Lasso and Ridge penalties, with a mixing parameter α that controls the balance between them:
Cost Function with Elastic Net Regularization = Original Cost Function + λ * [(1 – α) * Σβi^2 + α * Σ|βi|]
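As a small illustration of how these penalty terms relate to one another, here is a minimal NumPy sketch with a made-up coefficient vector and arbitrarily chosen λ and α (the values are for illustration only):
import numpy as np

# Hypothetical coefficients and hyperparameters
beta = np.array([0.5, -1.2, 0.0, 3.0])
lam, alpha = 0.1, 0.7

l1_penalty = lam * np.sum(np.abs(beta))     # Lasso (L1) term
l2_penalty = lam * np.sum(beta ** 2)        # Ridge (L2) term
# Elastic Net mixes the two with the parameter α
elastic_net_penalty = lam * ((1 - alpha) * np.sum(beta ** 2) + alpha * np.sum(np.abs(beta)))
print(l1_penalty, l2_penalty, elastic_net_penalty)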
Ridge Regression
Let us implement a Ridge Regression model on the Boston Housing dataset.
ridge_regression = linear_model.Ridge(alpha=0.25)
ridge_regression

ridge_regression.fit(X_train, y_train)
ridge_regression.coef_
ridge_predictions = ridge_regression.predict(X_test)
ridge_predictions
print("Mean squared error for Ridge regression : %.2f" % mean_squared_error(y_test, ridge_predictions)) print("Coefficient of determination for Ridge regression : %.2f" % r2_score(y_test, ridge_predictions))
We are getting a better result with Ridge Regression than with Ordinary Least Squares.
Lasso Regression
Lasso Regression reduces the number of features on which the target column depends by finding solutions with fewer non-zero coefficients. It is an extension of the Ordinary Least Squares method with an L1 regularization term.
Advantages:
- Can be used for simple datasets.
- It can be used for feature selection.
Disadvantages:
- It does not fit multiple regression problems (multi-output targets) jointly; see the Multi-Task Lasso below.
Let's implement a Lasso Regression model on the Boston Housing dataset.
lasso_regression = linear_model.Lasso(alpha=0.005)
lasso_regression
Since we have a fairly small dataset, we have chosen a value of alpha = 0.005. You can try various values to find the optimal one for the dataset you are using.
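If you would rather not pick alpha by hand, one option (a minimal sketch, reusing the training split created earlier) is to let scikit-learn's LassoCV search over candidate values with cross-validation:
from sklearn.linear_model import LassoCV

# Search for a reasonable alpha with 5-fold cross-validation on the training set
lasso_cv = LassoCV(cv=5, random_state=42)
lasso_cv.fit(X_train, y_train)
print("Alpha chosen by cross-validation:", lasso_cv.alpha_)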
lasso_regression.fit(X_train, y_train)
lasso_regression.coef_
lasso_predictions = lasso_regression.predict(X_test)
lasso_predictions
print("Mean squared error: %.2f for Lasso regression" % mean_squared_error(y_test, lasso_predictions)) print("Coefficient of determination: %.2f for Lasso regression" % r2_score(y_test, lasso_predictions))
Multi-Task Lasso
It is an extension of Lasso regression. It fits multiple regression problems (tasks) jointly on the dataset and selects the same set of features for all of them.
Advantages:
- Works on Multiple Regression Problems.
Disadvantages:
- Once the features are selected, those features would be the same for all the regression problems.
Let's implement the Multi-Task Lasso algorithm on the Boston Housing Dataset.
The target passed to the Multi-Task Lasso should be a two-dimensional array, so we convert the Series object to a 2-D array using NumPy's reshape method.
y_train_2d = np.reshape(y_train.tolist(), (-1, 1))
multi_task_lasso_regression = linear_model.MultiTaskLasso(alpha=0.025).fit(X_train, y_train_2d)
multi_task_lasso_regression
multi_task_lasso_predictions = multi_task_lasso_regression.predict(X_test)
multi_task_lasso_predictions = np.reshape(multi_task_lasso_predictions, (1, -1))
multi_task_lasso_predictions[0]
print("Mean squared error: %.2f for Multi-Task Lasso regression" % mean_squared_error(y_test, multi_task_lasso_predictions[0]))
print("Coefficient of determination: %.2f for Multi-Task Lasso regression" % r2_score(y_test, multi_task_lasso_predictions[0]))
We can see that the Mean Squared Error has been reduced. This means the algorithm is working well for this dataset.
Elastic Net
Elastic Net trains a model with both L1 and L2 regularization of the coefficients. It is useful for datasets where features are correlated with one another.
Advantages:
- It uses both L1 and L2 regularization penalties.
- It is useful when features in the dataset are correlated.
- It inherits properties of both Ridge and Lasso Regression.
Disadvantages:
- It does not fit multiple regression problems (multi-output targets) jointly; see the Multi-task Elastic-Net below.
from sklearn.linear_model import ElasticNet
elastic_net = ElasticNet(alpha=0.001, l1_ratio=0.001)
elastic_net.fit(X_train, y_train)
elastic_net.coef_
elastic_net_predictions = elastic_net.predict(X_test)
elastic_net_predictions
print("Mean squared error: %.2f for Elastic Net regression" % mean_squared_error(y_test, elastic_net_predictions)) print("Coefficient of determination: %.2f for Elastic Net regression" % r2_score(y_test, elastic_net_predictions))
The ability to use both the L1 and L2 regularization penalties has helped reduce the mean squared error. These hyperparameters can be tuned to get the best accuracy.
Multi-task Elastic-Net
It extends Elastic Net with the ability to fit multiple regression problems (tasks) jointly. Multi-task Elastic Net finds sparse coefficients shared across these regression problems. The target variable y must be a 2-D array.
multi_task_elastic_net = linear_model.MultiTaskElasticNet(alpha=0.001)
y_train_2d = np.reshape(y_train.tolist(), (-1, 1))
multi_task_elastic_net_regression = multi_task_elastic_net.fit(X_train, y_train_2d)
multi_task_elastic_net_regression
multi_task_elastic_net_predictions = multi_task_elastic_net.predict(X_test)
multi_task_elastic_net_predictions = np.reshape(multi_task_elastic_net_predictions, (1, -1))
multi_task_elastic_net_predictions[0]
print("Mean squared error: %.2f for Multi-task Elastic net" % mean_squared_error(y_test, multi_task_elastic_net_predictions[0])) print("Coefficient of determination: %.2f for Multi-task Elastic net" % r2_score(y_test, multi_task_elastic_net_predictions[0]))
lars.coef_
lars_predictions = lars.predict(X_test)
lars_predictions
print("Mean squared error: %.2f for Lars" % mean_squared_error(y_test, lars_predictions)) print("Coefficient of determination: %.2f for Lars" % r2_score(y_test, lars_predictions))
Lars Lasso
This algorithm computes the Lasso solution using Least Angle Regression, which produces the exact solution path as a piecewise-linear function of the norm of the coefficients, instead of the coordinate-descent approach used by the standard Lasso.
lars_lasso = linear_model.LassoLars(alpha=0.025, normalize=False)
lars_lasso.fit(X_train, y_train)
lars_lasso
lars_lasso.coef_
lars_lasso_predictions = lars_lasso.predict(X_test)
lars_lasso_predictions
print("Mean squared error: %.2f for Lars Lasso" % mean_squared_error(y_test, lars_lasso_predictions)) print("Coefficient of determination: %.2f for Lars Lasso " % r2_score(y_test, lars_lasso_predictions))
Practical Examples and Real-world Applications
Linear regression has a wide range of applications across various industries and disciplines. In this section, we will explore some practical examples and real-world use cases of linear regression.
Predicting House Prices: Real estate is a popular domain for applying linear regression. Using historical data on house prices and their features (e.g., square footage, number of bedrooms, location), a multiple linear regression model can be trained to predict the price of a house based on its features. This can help both buyers and sellers make informed decisions and assist real estate agents in pricing properties more accurately.
Forecasting Sales: Businesses can use linear regression to forecast future sales based on historical data and external factors (e.g., economic indicators, seasonal trends). By understanding the relationship between these factors and sales performance, businesses can make better-informed decisions regarding inventory management, marketing strategies, and resource allocation.
Estimating Customer Lifetime Value: Customer Lifetime Value (CLV) is a critical metric for businesses, as it represents the total revenue a company can expect from a single customer over their lifetime. Linear regression can be used to model the relationship between customer characteristics (e.g., demographics, purchase history) and their CLV, helping businesses identify high-value customers and tailor their marketing efforts accordingly.
Analyzing the Impact of Advertising: Marketing teams can use linear regression to assess the effectiveness of advertising campaigns by analyzing the relationship between advertising spending and sales performance. By understanding which channels and campaigns have the most significant impact on sales, businesses can optimize their marketing budget and improve their return on investment.
Evaluating the Effects of Policies: In the public sector, linear regression can be employed to evaluate the impact of policies and interventions on various outcomes (e.g., crime rates, healthcare outcomes, and educational attainment). By analyzing the relationship between policy variables and their intended outcomes, policymakers can make data-driven decisions and design more effective interventions.
In conclusion, linear regression is a versatile and powerful tool with numerous applications across various industries and disciplines. By understanding the principles of linear regression and how to implement it using popular Python libraries, you can unlock valuable insights from your data and make more informed decisions.
Limitations and Alternatives
Despite its versatility and simplicity, linear regression has certain limitations that can affect its performance and suitability for specific problems. In this section, we will discuss some of these limitations and suggest alternative methods to address them.
Limitations of Linear Regression
- Linearity: Linear regression assumes that the relationship between the input features and the target variable is linear. This may not always be the case, and using linear regression for problems with non-linear relationships can lead to poor model performance.
- Multicollinearity: Linear regression can perform poorly when there is multicollinearity among the input features, meaning that some features are highly correlated with others. This can lead to unstable coefficient estimates and difficulty in interpreting the feature importance.
- Outliers: Linear regression is sensitive to outliers, as they can heavily influence the model’s coefficients and lead to poor generalization performance.
- Homoscedasticity: Linear regression assumes that the variance of the errors is constant across all levels of the input features. If this assumption is violated, the model’s predictions may be less accurate.
Alternative Methods
Several alternative methods can be used to address the limitations of linear regression:
- Polynomial Regression: Polynomial regression can be used to model non-linear relationships between the input features and the target variable by adding higher-degree polynomial terms to the input features. This can help capture more complex patterns in the data (see the sketch after this list).
- Regularized Regression: As mentioned earlier, regularized regression methods like Lasso, Ridge, and Elastic Net can help mitigate the effects of multicollinearity and prevent overfitting.
- Robust Regression: Robust regression techniques, such as Huber regression or RANSAC, can be used to minimize the influence of outliers on the model’s coefficients and improve its generalization performance.
- Generalized Linear Models (GLMs): GLMs extend linear regression to handle non-normal error distributions and non-constant error variance, allowing for more flexibility in modeling various types of data.
- Decision Trees and Ensemble Methods: More advanced machine learning techniques, such as decision trees and ensemble methods (e.g., random forests, gradient boosting machines), can be used to model complex relationships between input features and target variables without relying on linear assumptions.
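For example, here is a minimal polynomial-regression sketch (reusing the training and test splits from earlier in this post) that expands the features with degree-2 polynomial and interaction terms before fitting an ordinary linear model:
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LinearRegression

# Degree-2 polynomial features feeding into a plain linear regression
poly_model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
poly_model.fit(X_train, y_train)
poly_predictions = poly_model.predict(X_test)
print("Coefficient of determination: %.2f for Polynomial regression" % r2_score(y_test, poly_predictions))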
Conclusion
Linear regression is a powerful tool for modeling the relationship between a dependent variable and one or more independent variables. By understanding its mathematical representation and the assumptions underlying the model, you can effectively use linear regression to make predictions and gain insights into data patterns.
Gradient descent is the optimization technique that finds the optimal coefficients for a linear regression model by minimizing the cost function; understanding how it works helps us train our models effectively and obtain accurate predictions.
The cost function plays a vital role in linear regression by quantifying the performance of the model and guiding the optimization process. By understanding the different cost functions and their properties, we can select the most appropriate one for our problem and ensure accurate and well-fitted models.
Regularization techniques help prevent overfitting in linear regression models by introducing a penalty term based on the model's coefficients. By understanding the different types of regularization and their properties, we can select the most appropriate one for our problem and build more robust and accurate models.
Summary
- LinearRegression() : To implement an Ordinary Least Squares model.
- Ridge(alpha=value) : To implement a Ridge Regression model.
- Lasso(alpha=value) : To implement a Lasso Regression model.
- MultiTaskLasso(alpha=value) : To implement a Multi-Task Lasso model.
- ElasticNet(alpha=value, l1_ratio=value) : To implement an Elastic-Net model.
- MultiTaskElasticNet(alpha=value) : To implement a Multi-Task Elastic Net model.
- Lars(n_nonzero_coefs=value, normalize=False) : To implement a Least Angle Regression model.
- LassoLars(alpha=value, normalize=False) : To implement a Lars Lasso model.
Must Read
- A Deep Dive into AUC-ROC Curve Analysis
- The Art of Model Evaluation: How to Interpret AUC ROC
- Perfecting the F1 Score: Optimizing Precision and Recall for Machine Learning
- Mastering the Balance for Optimal Machine Learning Performance
- Mastering VIF in Machine Learning for Robust Model Performance
- Harness the Power of PCA in Machine Learning
- Uncovering the Hidden Dangers of Multicollinearity.
- Unlock the Power of Feature Selection.
Quiz Time
Test your understanding of Linear Regression concepts and prepare well for interviews.