MADlib 1.9.1 User Documentation
Elastic Net Regularization

This module implements elastic net regularization for linear and logistic regression problems.

Training Function
The training function has the following syntax:
elastic_net_train( tbl_source,
                   tbl_result,
                   col_dep_var,
                   col_ind_var,
                   regress_family,
                   alpha,
                   lambda_value,
                   standardize,
                   grouping_col,
                   optimizer,
                   optimizer_params,
                   excluded,
                   max_iter,
                   tolerance
                 )

Arguments

tbl_source

TEXT. The name of the table containing the training data.

tbl_result

TEXT. Name of the generated table containing the output model. The output table produced by the elastic_net_train() function has the following columns:

regress_family     The regression type: 'gaussian' or 'binomial'.
features           An array of the features (independent variables) passed into the analysis.
features_selected  An array of the features selected by the analysis.
coef_nonzero       Fitting coefficients for the selected features.
coef_all           Coefficients for all features, both selected and unselected.
intercept          Fitting intercept for the model.
log_likelihood     The negative of the objective function given in the Technical Background section (up to a constant depending on the data set).
standardize        BOOLEAN. Whether the data were normalized (the value given for the standardize argument).
iteration_run      The number of iterations executed.

col_dep_var

TEXT. An expression for the dependent variable.

Both col_dep_var and col_ind_var can be valid Postgres expressions. For example, col_dep_var = 'log(y+1)', and col_ind_var = 'array[exp(x[1]), x[2], 1/(1+x[3])]'. In the binomial case, you can use a Boolean expression, for example, col_dep_var = 'y < 0'.

col_ind_var

TEXT. An expression for the independent variables. Use '*' to specify all columns of tbl_source except those listed in the excluded string. If col_dep_var is a column name, it is automatically excluded from the independent variables. However, if col_dep_var is a valid Postgres expression, any column names used within the expression are excluded only if they are explicitly listed in the excluded argument. It is therefore a good idea to add all column names involved in the dependent variable expression to the excluded string. (A sketch combining '*' with excluded follows this argument list.)

regress_family

TEXT. The regression type, either 'gaussian' ('linear') or 'binomial' ('logistic').

alpha

FLOAT8. Elastic net control parameter, a value in [0, 1]; 1 gives pure L1 regularization, 0 gives pure L2.

lambda_value

FLOAT8. Regularization parameter, positive.

standardize (optional)

BOOLEAN, default: TRUE. Whether to normalize the data. Setting this to TRUE usually yields better results and faster convergence.

grouping_col (optional)

TEXT, default: NULL.

Note
Not currently implemented. Any non-NULL value is ignored. Grouping support will be added in a future release. When implemented, an expression list will be used to group the input dataset into discrete groups, running one regression per group, similar to the SQL GROUP BY clause. When this value is NULL, no grouping is used and a single result model is generated.

optimizer (optional)

TEXT, default: 'fista'. Name of optimizer, either 'fista' or 'igd'.

optimizer_params (optional)

TEXT, default: NULL. Optimizer parameters, delimited with commas. The parameters differ depending on the value of optimizer. See the descriptions below for details.

excluded (optional)

TEXT, default: NULL. If the col_ind_var input is '*' then excluded can be provided as a comma-delimited list of column names that are to be excluded from the features. For example, 'col1, col2'. If the col_ind_var is an array, excluded must be a list of the integer array positions to exclude, for example '1,2'. If this argument is NULL or an empty string '', no columns are excluded.

max_iter (optional)

INTEGER, default: 1000. The maximum number of iterations that are allowed.

tolerance (optional)

FLOAT8, default: 1e-6. The criterion for ending iterations. Both the 'fista' and 'igd' optimizers compute the difference between the log-likelihood of two consecutive iterations; when the difference is smaller than tolerance or the iteration count exceeds max_iter, the computation stops.
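
As an illustration of expression inputs and the excluded mechanism, here is a hedged sketch (the houses table and its columns come from the Examples section below; the output table name is hypothetical):

    -- Sketch: dependent variable given as a Postgres expression, '*' for the
    -- independent variables, and the id and price columns excluded by name.
    -- Because col_dep_var is an expression, 'price' must be listed in excluded.
    SELECT madlib.elastic_net_train( 'houses',
                                     'houses_en_expr',
                                     'log(price)',
                                     '*',
                                     'gaussian',
                                     0.5,
                                     0.1,
                                     TRUE,
                                     NULL,
                                     'fista',
                                     '',
                                     'id, price',
                                     10000,
                                     1e-6
                                   );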

Optimizer Parameters
Optimizer parameters are supplied in a string containing a comma-delimited list of name-value pairs. All of these named parameters are optional, and their order does not matter. You must use the format "<param_name> = <value>" to specify the value of a parameter, otherwise the parameter is ignored.

When the elastic_net_train() optimizer argument value is 'fista', the optimizer_params argument is a string containing name-value pairs with the following format. (Line breaks are inserted for readability.) A usage sketch follows the parameter descriptions.

  'max_stepsize = <value>,
   eta = <value>,
   warmup = <value>,
   warmup_lambdas = <value>,
   warmup_lambda_no = <value>,
   warmup_tolerance = <value>,
   use_active_set = <value>,
   activeset_tolerance = <value>,
   random_stepsize = <value>'

Parameters

max_stepsize

Default: 4.0. Initial backtracking step size. At each iteration, the algorithm first tries stepsize = max_stepsize; if that does not work, it tries a smaller step size, stepsize = stepsize/eta, where eta must be larger than 1. At first glance this seems to perform repeated iterations for a single step, but starting from a larger step size actually greatly increases the computation speed and minimizes the total number of iterations. A careful choice of max_stepsize can decrease the computation time by a factor of ten or more.
eta

Default: 2. If the current step size does not work, stepsize/eta is tried next. Must be greater than 1.

warmup

Default: FALSE. If warmup is TRUE, a strictly decreasing series of lambda values, ending at the lambda value the user wants to calculate, is used. A larger lambda gives a sparser solution, and that sparse solution is used as the initial guess for the next lambda's solution, which speeds up the computation for the next lambda. For larger data sets, this can sometimes accelerate the whole computation and may be faster than computing with only one lambda value.

warmup_lambdas

Default: NULL. The lambda value series to use when warmup is TRUE. The default is NULL, which means that lambda values will be automatically generated.

warmup_lambda_no

Default: 15. How many lambdas are used in warm-up. If warmup_lambdas is not NULL, this value is overridden by the number of provided lambda values.

warmup_tolerance

Default: the value of the tolerance argument. The value of tolerance used during warmup.

use_active_set

Default: FALSE. If use_active_set is TRUE, an active-set method is used to speed up the computation. Considerable speedup is obtained by organizing the iterations around the active set of features—those with nonzero coefficients. After a complete cycle through all the variables, we iterate on only the active set until convergence. If another complete cycle does not change the active set, we are done, otherwise the process is repeated.

activeset_tolerance

Default: the value of the tolerance argument. The value of tolerance used during active set calculation.

random_stepsize

Default: FALSE. Whether to add some randomness to the step size. Sometimes this can speed up the calculation.
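
As a usage sketch of the 'fista' parameters above (the table and columns come from the Examples section below; the parameter values are illustrative, not recommendations):

    -- Sketch: FISTA with warmup and the active-set method enabled.
    SELECT madlib.elastic_net_train( 'houses',
                                     'houses_en_fista',
                                     'price',
                                     'array[tax, bath, size]',
                                     'gaussian',
                                     0.5,
                                     0.1,
                                     TRUE,
                                     NULL,
                                     'fista',
                                     'max_stepsize = 4.0, eta = 2, warmup = TRUE,
                                      warmup_lambda_no = 15, use_active_set = TRUE',
                                     NULL,
                                     10000,
                                     1e-6
                                   );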

When the elastic_net_train() optimizer argument value is 'igd', the optimizer_params argument is a string containing name-value pairs with the following format. (Line breaks are inserted for readability.) A usage sketch follows the parameter descriptions.

  'stepsize = <value>,
   step_decay = <value>,
   threshold = <value>,
   warmup = <value>,
   warmup_lambdas = <value>,
   warmup_lambda_no = <value>,
   warmup_tolerance = <value>,
   parallel = <value>'

Parameters

stepsize

Default: 0.01.

step_decay

Default: 0, which means that a constant step size is used in IGD. The actual step size used for the current step is (previous step size) / exp(step_decay).
threshold

Default: 1e-10. When a coefficient is smaller than this threshold (after the rescaling described below), it is set to zero.

Due to the stochastic nature of SGD, coefficients that should be exactly zero usually come out as very small but nonzero values. Therefore, threshold is needed at the end of the computation to screen out these tiny values and hard-set them to zero. This is accomplished as follows: (1) multiply each coefficient by the standard deviation of the corresponding feature; (2) compute the average of the absolute values of the rescaled coefficients; (3) divide each rescaled coefficient by the average, and if the resulting absolute value is smaller than threshold, set the original coefficient to zero.
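
Restating these steps in symbols, with $w_i$ the fitted coefficient and $\sigma_i$ the standard deviation of feature $i$:

\[ \tilde{w}_i = w_i \, \sigma_i\ , \qquad \bar{a} = \frac{1}{N}\sum_{i=1}^{N} |\tilde{w}_i|\ , \qquad w_i \leftarrow 0 \ \text{ if } \ \frac{|\tilde{w}_i|}{\bar{a}} < \text{threshold}\ . \]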

warmup

Default: FALSE. If warmup is TRUE, a strictly decreasing series of lambda values, ending at the lambda value the user wants to calculate, is used. A larger lambda gives a sparser solution, and that sparse solution is used as the initial guess for the next lambda's solution, which speeds up the computation for the next lambda. For larger data sets, this can sometimes accelerate the whole computation and may be faster than computing with only one lambda value.
warmup_lambdas

Default: NULL. An array of lambda values to use for warmup.

warmup_lambda_no

Default: 15. The number of lambdas used in warm-up. If warmup_lambdas is not NULL, this argument is overridden by the size of the warmup_lambdas array.

warmup_tolerance

Default: the value of the tolerance argument. The value of tolerance used during warmup.
parallel

Default: TRUE. Whether to run the computation on multiple segments.

SGD is an inherently sequential algorithm. When running in a distributed manner, each segment of the data runs its own SGD model, and the models are then averaged to produce a model for each iteration. This averaging might slow down convergence, although it also provides the ability to process large data sets on multiple machines. The parallel option therefore lets you choose whether to do the computation in parallel.
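
As a usage sketch of the 'igd' parameters above (the values are illustrative; the table and columns come from the Examples section below):

    -- Sketch: IGD with an explicit step size and sequential execution.
    SELECT madlib.elastic_net_train( 'houses',
                                     'houses_en_igd',
                                     'price',
                                     'array[tax, bath, size]',
                                     'gaussian',
                                     0.5,
                                     0.1,
                                     TRUE,
                                     NULL,
                                     'igd',
                                     'stepsize = 0.01, step_decay = 0,
                                      threshold = 1e-10, parallel = FALSE',
                                     NULL,
                                     10000,
                                     1e-6
                                   );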

Prediction Function

Per-Tuple Prediction

The prediction function returns a DOUBLE PRECISION value for the Gaussian family and a BOOLEAN value for the Binomial family.

The predict function has the following syntax (elastic_net_gaussian_predict() and elastic_net_binomial_predict()):

elastic_net_<family>_predict(
                     coefficients,
                     intercept,
                     ind_var
                   )

Arguments

coefficients
DOUBLE PRECISION[]. Fitting coefficients, usually coef_all or coef_nonzero.
intercept
DOUBLE PRECISION. The intercept for the model.
ind_var
DOUBLE PRECISION[]. Independent variables that correspond to the coefficients: use the features column in tbl_result with coef_all, and the features_selected column with coef_nonzero. See also the examples below. Note that unexpected results or errors may be returned if this argument is not given properly.

For the binomial family, there is a function (elastic_net_binomial_prob()) that outputs the probability of the instance being TRUE:

elastic_net_binomial_prob(
                     coefficients,
                     intercept,
                     ind_var
                   )
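
For example, a hedged sketch: assuming a binomial model has been trained into a table houses_en_bin (hypothetical, not created in this document) on the features tax, bath, and size from the houses table below, the per-row probability of the positive class can be computed as:

    -- Sketch: probability of each instance being TRUE under a binomial model.
    SELECT h.id,
           madlib.elastic_net_binomial_prob( m.coef_all,
                                             m.intercept,
                                             ARRAY[tax, bath, size]
                                           ) AS prob_true
    FROM houses h, houses_en_bin m
    ORDER BY h.id;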

Per-Table Prediction

Alternatively, you can use another prediction function that stores the prediction result in a table (elastic_net_predict()). This is useful if you want to use elastic net together with the general cross validation function.

elastic_net_predict( tbl_model,
                     tbl_new_sourcedata,
                     col_id,
                     tbl_predict
                   )

Arguments

tbl_model
TEXT. The name of the table containing the output from the training function.
tbl_new_sourcedata
TEXT. The name of the table containing the new source data.
col_id
TEXT. The unique ID associated with each row.
tbl_predict
TEXT. The name of table to store the prediction result.

You do not need to specify whether the model is "linear" or "logistic" because this information is already included in the tbl_model table.
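
For instance, a minimal sketch that scores the houses table with the houses_en model built in the Examples section below (the output table name is hypothetical):

    -- Sketch: store batch predictions in a new table keyed by id.
    DROP TABLE IF EXISTS houses_pred;
    SELECT madlib.elastic_net_predict( 'houses_en',
                                       'houses',
                                       'id',
                                       'houses_pred'
                                     );
    SELECT * FROM houses_pred;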

Examples
  1. Display online help for the elastic_net_train() function.
    SELECT madlib.elastic_net_train();
    
  2. Create an input data set.
    DROP TABLE IF EXISTS houses;
    CREATE TABLE houses ( id INT,
                          tax INT,
                          bedroom INT,
                          bath FLOAT,
                          price INT,
                          size INT,
                          lot INT
                        );
    COPY houses FROM STDIN WITH DELIMITER '|';
      1 |  590 |       2 |    1 |  50000 |  770 | 22100
      2 | 1050 |       3 |    2 |  85000 | 1410 | 12000
      3 |   20 |       3 |    1 |  22500 | 1060 |  3500
      4 |  870 |       2 |    2 |  90000 | 1300 | 17500
      5 | 1320 |       3 |    2 | 133000 | 1500 | 30000
      6 | 1350 |       2 |    1 |  90500 |  820 | 25700
      7 | 2790 |       3 |  2.5 | 260000 | 2130 | 25000
      8 |  680 |       2 |    1 | 142500 | 1170 | 22000
      9 | 1840 |       3 |    2 | 160000 | 1500 | 19000
     10 | 3680 |       4 |    2 | 240000 | 2790 | 20000
     11 | 1660 |       3 |    1 |  87000 | 1030 | 17500
     12 | 1620 |       3 |    2 | 118600 | 1250 | 20000
     13 | 3100 |       3 |    2 | 140000 | 1760 | 38000
     14 | 2070 |       2 |    3 | 148000 | 1550 | 14000
     15 |  650 |       3 |  1.5 |  65000 | 1450 | 12000
    \.
    
  3. Train the model.
    DROP TABLE IF EXISTS houses_en;
    SELECT madlib.elastic_net_train( 'houses',
                                     'houses_en',
                                     'price',
                                     'array[tax, bath, size]',
                                     'gaussian',
                                     0.5,
                                     0.1,
                                     TRUE,
                                     NULL,
                                     'fista',
                                     '',
                                     NULL,
                                     10000,
                                     1e-6
                                   );
    
  4. View the resulting model.
    -- Turn on expanded display to make it easier to read results.
    \x on
    SELECT * FROM houses_en;
    
    Result:
    -[ RECORD 1 ]-----+--------------------------------------------
    family            | gaussian
    features          | {tax,bath,size}
    features_selected | {tax,bath,size}
    coef_nonzero      | {27.6945611671,11509.0099734,49.0945557639}
    coef_all          | {27.6945611671,11509.0099734,49.0945557639}
    intercept         | -11145.5061503
    log_likelihood    | -490118975.406
    standardize       | t
    iteration_run     | 322
    
  5. Use the prediction function to evaluate residuals.
    \x off
    SELECT id, price, predict, price - predict AS residual
    FROM (
        SELECT
            houses.*,
            madlib.elastic_net_gaussian_predict(
                m.coef_all,
                m.intercept,
                ARRAY[tax,bath,size]
                ) AS predict
        FROM houses, houses_en m) s
    ORDER BY id;
    

Additional Example (when coef_nonzero is different from coef_all)

  1. Reuse the houses table above and train the model with alpha=1 (L-1) and a large lambda (30000).
    DROP TABLE IF EXISTS houses_en2;
    SELECT madlib.elastic_net_train( 'houses',
                                     'houses_en2',
                                     'price',
                                     'array[tax, bath, size]',
                                     'gaussian',
                                     1,
                                     30000,
                                     TRUE,
                                     NULL,
                                     'fista',
                                     '',
                                     NULL,
                                     10000,
                                     1e-6
                                   );
    
  2. View the resulting model and see coef_nonzero is different from coef_all.
    -- Turn on expanded display to make it easier to read results.
    \x on
    SELECT * FROM houses_en2;
    
    Result:
    -[ RECORD 1 ]-----+--------------------------------------------
    features          | {tax,bath,size}
    features_selected | {tax,size}
    coef_nonzero      | {13.3261747481,22.7347986045}
    coef_all          | {13.3261747481,0,22.7347986045}
    intercept         | 68877.5045405
    log_likelihood    | -1694746275.43
    standardize       | t
    iteration_run     | 115
    
  3. We can still use the prediction function with coef_all to evaluate residuals.
    \x off
    SELECT id, price, predict, price - predict AS residual
    FROM (
        SELECT
            houses.*,
            madlib.elastic_net_gaussian_predict(
                m.coef_all,
                m.intercept,
                ARRAY[tax,bath,size]
                ) AS predict
        FROM houses, houses_en2 m) s
    ORDER BY id;
    
  4. Alternatively, we can speed up the prediction function by using coef_nonzero to evaluate residuals. This requires examining the features_selected column in the result table in order to construct the matching independent variables.
    \x off
    SELECT id, price, predict, price - predict AS residual
    FROM (
        SELECT
            houses.*,
            madlib.elastic_net_gaussian_predict(
                m.coef_nonzero,
                m.intercept,
                ARRAY[tax,size]
                ) AS predict
        FROM houses, houses_en2 m) s
    ORDER BY id;
    
    The two queries are expected to give the same residuals:
     id | price  |     predict      |     residual
    ----+--------+------------------+-------------------
      1 |  50000 |  94245.742567344 |  -44245.742567344
      3 |  22500 |  93242.914556232 |  -70742.914556232
      5 | 133000 | 120570.253114742 |   12429.746885258
      7 | 260000 | 154482.653115284 |  105517.346884716
      9 | 160000 | 127499.863983754 |   32500.136016246
     11 |  87000 | 114415.797184981 |  -27415.797184981
     13 | 140000 |  150201.89180353 |   -10201.89180353
     15 |  65000 |  110504.97610329 |   -45504.97610329
      2 |  85000 |  114926.05405835 |   -29926.05405835
      4 |  90000 | 110026.514757197 |  -20026.514757197
      6 |  90500 | 105510.375306125 |  -15010.375306125
      8 | 142500 | 104539.017736473 |   37960.982263527
     10 | 240000 | 181347.915720063 |   58652.084279937
     12 | 118600 | 118884.405888047 | -284.405888046997
     14 | 148000 | 131701.624106042 |   16298.375893958
    (15 rows)
    

Note
It is strongly recommended that you run elastic_net_train() on a subset of the data with a limited max_iter before applying it to the full data set with a large max_iter. In the pre-run, you can adjust the parameters to get the best performance and then apply the best set of parameters to the whole data set.
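
A minimal sketch of such a pre-run, under the assumption that a small random sample is representative (all names here are illustrative):

    -- Sketch: tune parameters on a random subset with a small max_iter,
    -- then reuse the chosen parameters on the full table.
    DROP TABLE IF EXISTS houses_subset;
    CREATE TABLE houses_subset AS
        SELECT * FROM houses ORDER BY random() LIMIT 8;
    SELECT madlib.elastic_net_train( 'houses_subset',
                                     'houses_subset_en',
                                     'price',
                                     'array[tax, bath, size]',
                                     'gaussian',
                                     0.5,
                                     0.1,
                                     TRUE,
                                     NULL,
                                     'fista',
                                     '',
                                     NULL,
                                     100,
                                     1e-4
                                   );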

Technical Background

Elastic net regularization seeks to find a weight vector that, for any given training example set, minimizes:

\[\min_{w \in R^N} L(w) + \lambda \left(\frac{(1-\alpha)}{2} \|w\|_2^2 + \alpha \|w\|_1 \right)\]

where $L$ is the metric function that the user wants to minimize. Here $\alpha \in [0,1]$ and $\lambda \geq 0$. If $\alpha = 0$, we have ridge regularization (also known as Tikhonov regularization), and if $\alpha = 1$, we have LASSO regularization.

For the Gaussian response family (or linear model), we have

\[L(\vec{w}) = \frac{1}{2}\left[\frac{1}{M} \sum_{m=1}^M (w^{t} x_m + w_{0} - y_m)^2 \right] \]

For the Binomial response family (or logistic model), we have

\[ L(\vec{w}) = \sum_{m=1}^M\left[y_m \log\left(1 + e^{-(w_0 + \vec{w}\cdot\vec{x}_m)}\right) + (1-y_m) \log\left(1 + e^{w_0 + \vec{w}\cdot\vec{x}_m}\right)\right]\ , \]

where $y_m \in \{0,1\}$.

To get better convergence, one can rescale the value of each element of x

\[ x' \leftarrow \frac{x - \bar{x}}{\sigma_x} \]

and for the Gaussian case we also let

\[y' \leftarrow y - \bar{y} \]

and then minimize with the regularization terms. At the end of the calculation, the original scales are restored, and an intercept term is obtained at the same time as a by-product.
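
For the Gaussian case, restoring the original scales amounts to the standard back-transformation (stated here for clarity, not quoted from the implementation):

\[ w_i = \frac{w'_i}{\sigma_{x_i}}\ , \qquad w_0 = \bar{y} - \sum_i w_i \bar{x}_i\ , \]

where $w'$ minimizes the rescaled problem.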

Note that fitting after scaling is not equivalent to fitting directly on the unscaled data.

Literature

[1] Elastic net regularization. http://en.wikipedia.org/wiki/Elastic_net_regularization

[2] Beck, A. and M. Teboulle (2009), A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. on Imaging Sciences 2(1), 183-202.

[3] Shai Shalev-Shwartz and Ambuj Tewari, Stochastic Methods for l1 Regularized Loss Minimization. Proceedings of the 26th International Conference on Machine Learning, Montreal, Canada, 2009.

Related Topics

File elastic_net.sql_in documenting the SQL functions.

Cross Validation (grp_validation)