Regularization: Machine Learning Interview Prep 09
Regularization in machine learning adds constraints that keep a model from becoming too complex, much like guardrails on a road keep a car from veering off course. This helps prevent overfitting, where the model memorizes the training data instead of learning general patterns. By encouraging the model to generalize to new, unseen data, regularization makes it more reliable in real-world situations.
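To make this concrete before the quiz, here is a minimal sketch, assuming Python with NumPy and scikit-learn installed, that fits plain OLS alongside Ridge (L2), Lasso (L1), and Elastic Net on a small synthetic dataset; the alpha values, l1_ratio, and the synthetic data are illustrative choices, not recommended settings.

```python
# A minimal sketch: compare coefficients of OLS vs. regularized linear models.
# The dataset and penalty strengths below are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet

# Synthetic data: only the first two of ten features actually matter.
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=100)

models = {
    "OLS": LinearRegression(),
    "Ridge (L2)": Ridge(alpha=1.0),            # penalizes squared coefficients
    "Lasso (L1)": Lasso(alpha=0.1),            # penalizes absolute coefficients
    "Elastic Net": ElasticNet(alpha=0.1, l1_ratio=0.5),  # mix of L1 and L2
}

for name, model in models.items():
    model.fit(X, y)
    print(f"{name:12s} coefficients: {np.round(model.coef_, 2)}")
```

With settings like these you would typically see Lasso push most of the irrelevant coefficients to exactly zero, while Ridge only shrinks them toward zero and Elastic Net lands somewhere in between.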
Let’s check your basic knowledge of Regularization: Ridge, Lasso, and Elastic Net. Here are 10 multiple-choice questions for you, and there’s no time limit. Have fun!
Question 1: When Ordinary Least Squares (OLS) is modified to also minimize the sum of the absolute values of the coefficients, the technique is called _____. (Select two)
(A) L1 regularization
(B) Lasso Regression
(C) L2 regularization
(D) Ridge Regression
Question 2: When Ordinary Least Squares (OLS) is modified to also minimize the sum of the squared coefficients, the technique is called _____. (Select two)
(A) L2 regularization
(B) Ridge Regression
(C) L1 regularization
(D) Lasso Regression
Question 3: Which one is a regularization technique?
(A) Ridge Regression, which penalizes the sum of squared coefficients (L2 penalty)
(B) Lasso Regression, which penalizes the sum of absolute values of the coefficients (L1 penalty)
(C) Elastic Net, a convex combination of Ridge and Lasso
(D) All of the above
Question 4: Which one is a supervised Machine Learning technique?
(A) Ridge
(B) Lasso
(C) Elastic Net
(D) All of the above
Question 5: Which technique is a convex combination of Ridge and Lasso and can overcome some of the drawbacks of both?
(A) Support Vector Machine
(B) Linear Regression
(C) Elastic Net
(D) Logistic Regression
Question 6: Which one is correct about Elastic Net?
(A) Loss function = OLS loss function + 𝛼*Ridge Penalty + (1−𝛼)*LASSO Penalty; where 𝛼 ∈ [0,1]
(B) Loss function = 𝛼*Ridge Penalty + (1−𝛼)*LASSO Penalty; where 𝛼 ∈ [0,1]
(C) Loss function = OLS loss function + 𝛼 + (1−𝛼)*LASSO Penalty; where 𝛼 ∈ [0,1]
(D) None of these
Question 7: Which one is correct about Ridge regression?
(A) Loss function = OLS loss function + 𝛼*(square of the magnitude of the coefficients); where 0 < 𝛼 < ∞
(B) Loss function = 𝛼*(square of the magnitude of the coefficients); where 0 < 𝛼 < ∞
(C) Loss function = OLS loss function + 𝛼*(square root of the magnitude of the coefficients); where 0 < 𝛼 < ∞
(D) None of these
Question 8: Which one is correct about LASSO regression?
(A) Loss function = OLS loss function + 𝛼*(sum of the absolute values of the coefficients); where 0 < 𝛼 < ∞
(B) Loss function = 𝛼*(absolute sum of the coefficients); where 0 < 𝛼 < ∞
(C) Loss function = OLS loss function + 𝛼*(sum of the coefficients); where 0 < 𝛼 < ∞
(D) None of these
Question 9: Which of the following are solutions to prevent overfitting on the training data?
(A) Ridge
(B) Lasso
(C) Elastic Net
(D) All of the above
Question 10: If a linear regression model is underfitting, which regularization technique would you use?
(A) Ridge
(B) Lasso
(C) Elastic Net
(D) None of these
The solutions will be published in the next quiz, Random Forest: Machine Learning Interview Prep 10.
Happy learning! If you like the questions and enjoy taking the test, please subscribe to my email list for the latest ML questions, follow my Medium profile, and leave a clap for me. Feel free to discuss your thoughts on these questions in the comment section. Don’t forget to share the quiz link with your friends or LinkedIn connections. If you want to connect with me on LinkedIn, here is my LinkedIn profile.
The solutions to Linear Regression (Part 2): Machine Learning Interview Prep 08 - 1(D), 2(D), 3(A), 4(A), 5(B), 6(A, C), 7(D), 8(D), 9(A, B), 10(C)