The lasso is a shrinkage method whose estimate is defined by (3.51): \[ \hat{\beta}^{lasso} = \underset{\beta}{argmin}\sum_{i=1}^N \left( y_i-\beta_0-\sum_{j=1}^p{x_{ij}\beta_j} \right)^2 \quad \text{subject to } \sum_{j=1}^p|\beta_j| \le t \]
We can reparametrize the constant \(\beta_0\) by standardizing the predictors; the solution for \(\hat{\beta}_0\) is \(\overline{y}\), and thereafter we can fit a model without an intercept.
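A minimal numerical sketch of this point, on synthetic data (the variable names and the use of scikit-learn's `Lasso` solver are my own choices, not part of the text): once the predictors are centered, the fitted intercept equals \(\overline{y}\).

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
N, p = 50, 3
X = rng.standard_normal((N, p)) + 2.0          # predictors with nonzero means
y = 1.5 + X @ np.array([1.0, 0.0, -1.0]) + 0.1 * rng.standard_normal(N)

Xc = X - X.mean(axis=0)                        # center (part of standardizing) the predictors

# With centered predictors the intercept decouples from the penalized
# coefficients, so the fitted intercept is simply the mean of y.
m = Lasso(alpha=0.1).fit(Xc, y)
print(m.intercept_, y.mean())                  # the two values agree
```

After this step one can work with centered \(y_i\) and drop \(\beta_0\) from the optimization entirely.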
We can also write the lasso in the equivalent Lagrangian form (3.52):
\[\hat{\beta}^{lasso} = \underset{\beta}{argmin} \left\{ \frac{1}{2}\sum_{i=1}^N \left( y_i-\beta_0-\sum_{j=1}^p{x_{ij}\beta_j} \right)^2 +\lambda\sum_{j=1}^p|\beta_j| \right\}\]
The lasso solution is nonlinear in the \(y_i\), and there is no closed-form expression as in ridge regression; computing it is a quadratic programming problem.
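The Lagrangian form (3.52) can be sketched with scikit-learn's `Lasso`, which solves essentially this objective with the RSS scaled by \(1/(2N)\), so its `alpha` plays the role of \(\lambda\) up to that scaling (the data and parameter values below are illustrative assumptions). The sketch also shows the hallmark of the \(\ell_1\) penalty: some coefficients are shrunk exactly to zero.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
N, p = 100, 5
X = rng.standard_normal((N, p))
# true coefficients: only the first two predictors matter
beta_true = np.array([3.0, -2.0, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(N)

# scikit-learn minimizes (1/(2N))||y - X b||^2 + alpha * ||b||_1,
# i.e. the Lagrangian lasso (3.52) with the RSS term rescaled by 1/N
model = Lasso(alpha=0.5, fit_intercept=True)
model.fit(X, y)
print(model.coef_)  # the irrelevant coefficients are exactly zero
```

Internally this solver uses coordinate descent rather than a generic quadratic-programming routine, but the optimization problem it solves is the same one described above.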