Linear Regression
User Documentation for Apache MADlib, version 2.1.0

Linear regression models a linear relationship between a scalar dependent variable \( y \) and one or more explanatory independent variables \( x \), estimating a vector of coefficients that best fits the observed data.

Training Function

The linear regression training function has the following syntax.

linregr_train( source_table,
               out_table,
               dependent_varname,
               independent_varname,
               grouping_cols,
               heteroskedasticity_option
             )

Arguments

source_table

TEXT. Name of the table containing the training data.

out_table

TEXT. Name of the generated table containing the output model.

The output table contains the following columns:

<...> Any grouping columns provided during training. Present only if the grouping option is used.
coef FLOAT8[]. Vector of the coefficients of the regression.
r2 FLOAT8. R-squared coefficient of determination of the model.
std_err FLOAT8[]. Vector of the standard error of the coefficients.
t_stats FLOAT8[]. Vector of the t-statistics of the coefficients.
p_values FLOAT8[]. Vector of the p-values of the coefficients.
condition_no FLOAT8. The condition number of the \(X^{*}X\) matrix. A high condition number usually indicates that there may be some numeric instability in the result, yielding a less reliable model. A high condition number often arises when there is a significant amount of collinearity in the underlying design matrix, in which case other regression techniques, such as elastic net regression, may be more appropriate.
bp_stats FLOAT8. The Breusch-Pagan statistic for heteroskedasticity. Present only if the heteroskedasticity_option argument was set to TRUE when the model was trained.
bp_p_value FLOAT8. The Breusch-Pagan p-value. Present only if the heteroskedasticity_option argument was set to TRUE when the model was trained.
num_rows_processed BIGINT. The number of rows that are actually used in each group.
num_missing_rows_skipped INTEGER. The number of rows in each group that were skipped because they contained NULL values in the dependent or independent variables.
variance_covariance FLOAT[]. Variance/covariance matrix.
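As a consistency check (a sketch, assuming the houses_linregr model built in the examples below), std_err is the element-wise square root of the diagonal of variance_covariance:

SELECT i,
       sqrt(variance_covariance[i][i]) AS se_from_vcov,
       std_err[i] AS se_reported
FROM houses_linregr,
     generate_series(1, array_length(std_err, 1)) AS i;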

A summary table named <out_table>_summary is created together with the output table. It has the following columns:

method 'linregr' for linear regression.
source_table The data source table name.
out_table The output table name.
dependent_varname The dependent variable.
independent_varname The independent variables.
num_rows_processed The total number of rows that were used in the computation.
num_missing_rows_skipped The total number of rows that were skipped because of NULL values in them.
grouping_cols Names of the grouping columns.
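For example, once the model in the examples below has been trained, the companion summary table can be inspected directly (houses_linregr is the output table name used there):

SELECT * FROM houses_linregr_summary;
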
Note
For p-values, we return the computed result directly. Other statistical packages, such as R, produce the same values but apply a formatting function when printing them to screen: any p-value smaller than the machine epsilon (the smallest positive floating-point number 'x' such that '1 + x != 1') is printed as "< xxx", where xxx is the value of the machine epsilon. Although the results may look different, they are in fact the same.
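As an illustration, R-style printing can be reproduced in SQL (a sketch, assuming the houses_linregr model built in the examples below; 2.22e-16 is the double-precision machine epsilon):

SELECT CASE WHEN p < 2.220446049250313e-16
            THEN '< 2.22e-16'
            ELSE p::text
       END AS p_value_printed
FROM (SELECT unnest(p_values) AS p FROM houses_linregr) sub;
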
dependent_varname

TEXT. Expression to evaluate for the dependent variable.

independent_varname

TEXT. Expression list to evaluate for the independent variables. An intercept variable is not assumed. It is common to provide an explicit intercept term by including a single constant 1 term in the independent variable list.
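For example, with hypothetical predictor columns x1 and x2, an intercept plus two predictors would be written as:

'ARRAY[1, x1, x2]'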

grouping_cols (optional)

TEXT, default: NULL. An expression list used to group the input dataset into discrete groups, running one regression per group. Similar to the SQL GROUP BY clause. When this value is null, no grouping is used and a single result model is generated for the whole data set.

heteroskedasticity_option (optional)
BOOLEAN, default: FALSE. When TRUE, the heteroskedasticity of the model is also calculated and returned with the results.
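For example, the Breusch-Pagan columns can be requested at training time (a sketch reusing the houses data from the examples below, with a hypothetical output table name):

DROP TABLE IF EXISTS houses_linregr_bp, houses_linregr_bp_summary;
SELECT madlib.linregr_train( 'houses',
                             'houses_linregr_bp',
                             'price',
                             'ARRAY[1, tax, bath, size]',
                             NULL,
                             TRUE
                           );
SELECT bp_stats, bp_p_value FROM houses_linregr_bp;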

Warning
The aggregate 'linregr' has been deprecated in favor of the function 'linregr_train'. If the aggregate 'linregr' is used to output the results of linear regression to a table, it is recommended to follow the general pattern shown below (replace text within '<...>' with the appropriate variable names).
CREATE TABLE <output table> AS
SELECT (r).*
FROM (
    SELECT linregr(<dependent variable>, <independent variable>) as r
    FROM <source table>
    ) q;

Prediction Function
The prediction function is as follows:
linregr_predict(coef, col_ind)
Arguments
coef
FLOAT8[]. Vector of the coefficients of regression from training.
col_ind

FLOAT8[]. An array of the independent variable values for which to compute the prediction, in the same order as the independent variables used for training.
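For instance, a single ad-hoc prediction looks like the following (a sketch, assuming the houses_linregr model built in the examples below and hypothetical feature values):

SELECT madlib.linregr_predict( m.coef,
                               ARRAY[1, 1500, 2, 1800]::float8[]
                             ) AS predicted_price
FROM houses_linregr m;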

Examples
  1. Create an input data set.
    DROP TABLE IF EXISTS houses;
    CREATE TABLE houses (id INT, tax INT, bedroom INT, bath FLOAT, price INT,
                size INT, lot INT);
    INSERT INTO houses VALUES
      (1 ,  590 ,       2 ,    1 ,  50000 ,  770 , 22100),
      (2 , 1050 ,       3 ,    2 ,  85000 , 1410 , 12000),
      (3 ,   20 ,       3 ,    1 ,  22500 , 1060 ,  3500),
      (4 ,  870 ,       2 ,    2 ,  90000 , 1300 , 17500),
      (5 , 1320 ,       3 ,    2 , 133000 , 1500 , 30000),
      (6 , 1350 ,       2 ,    1 ,  90500 ,  820 , 25700),
      (7 , 2790 ,       3 ,  2.5 , 260000 , 2130 , 25000),
      (8 ,  680 ,       2 ,    1 , 142500 , 1170 , 22000),
      (9 , 1840 ,       3 ,    2 , 160000 , 1500 , 19000),
     (10 , 3680 ,       4 ,    2 , 240000 , 2790 , 20000),
     (11 , 1660 ,       3 ,    1 ,  87000 , 1030 , 17500),
     (12 , 1620 ,       3 ,    2 , 118600 , 1250 , 20000),
     (13 , 3100 ,       3 ,    2 , 140000 , 1760 , 38000),
     (14 , 2070 ,       2 ,    3 , 148000 , 1550 , 14000),
     (15 ,  650 ,       3 ,  1.5 ,  65000 , 1450 , 12000);
    
  2. Train a regression model. First, we generate a single regression for all data.
    DROP TABLE IF EXISTS houses_linregr, houses_linregr_summary;
    SELECT madlib.linregr_train( 'houses',
                                 'houses_linregr',
                                 'price',
                                 'ARRAY[1, tax, bath, size]'
                               );
    
    (Note that in this example we dynamically create the array of independent variables from column names. If you have a large number of independent variables, beyond the PostgreSQL limit on the maximum number of columns per table, you would pre-build the arrays and store them in a single column; see the sketch in example 8 below.)
  3. Next we generate three output models, one for each value of "bedroom".
    DROP TABLE IF EXISTS houses_linregr_bedroom, houses_linregr_bedroom_summary;
    SELECT madlib.linregr_train( 'houses',
                                 'houses_linregr_bedroom',
                                 'price',
                                 'ARRAY[1, tax, bath, size]',
                                 'bedroom'
                               );
    
  4. Examine the resulting models.
    -- Set extended display on for easier reading of output
    \x ON
    SELECT * FROM houses_linregr;
    
    Result:
    -[ RECORD 1 ]+---------------------------------------------------------------------------
    coef                     | {-12849.4168959872,28.9613922651765,10181.6290712648,50.516894915354}
    r2                       | 0.768577580597443
    std_err                  | {33453.0344331391,15.8992104963997,19437.7710925923,32.928023174087}
    t_stats                  | {-0.38410317968819,1.82156166004184,0.523806408809133,1.53416118083605}
    p_values                 | {0.708223134615422,0.0958005827189772,0.610804093526536,0.153235085548186}
    condition_no             | 9002.50457085737
    num_rows_processed       | 15
    num_missing_rows_skipped | 0
    variance_covariance      | {{1119105512.78479,217782.067878023,-283344228.394562,-616679.69319088}, ...
    
    Alternatively you can unnest the results for easier reading of output.
    \x OFF
    SELECT unnest(ARRAY['intercept','tax','bath','size']) as attribute,
           unnest(coef) as coefficient,
           unnest(std_err) as standard_error,
           unnest(t_stats) as t_stat,
           unnest(p_values) as pvalue
    FROM houses_linregr;
    
    Result:
     attribute |    coefficient    |  standard_error  |      t_stat       |       pvalue
    -----------+-------------------+------------------+-------------------+--------------------
     intercept | -12849.4168959872 | 33453.0344331391 | -0.38410317968819 |  0.708223134615422
     tax       |  28.9613922651765 | 15.8992104963997 |  1.82156166004184 | 0.0958005827189772
     bath      |  10181.6290712648 | 19437.7710925923 | 0.523806408809133 |  0.610804093526536
     size      |   50.516894915354 |  32.928023174087 |  1.53416118083605 |  0.153235085548186
    (4 rows)
    
  5. View the results grouped by bedroom.
    \x ON
    SELECT * FROM houses_linregr_bedroom ORDER BY bedroom;
    
    Result:
    -[ RECORD 1 ]------------+----------------------------------------------------------------
    bedroom                  | 4
    coef                     | {0.0112536020318378,41.4132554771633,0.0225072040636757,31.3975496688276}
    r2                       | 1
    std_err                  | {0,0,0,0}
    t_stats                  | {Infinity,Infinity,Infinity,Infinity}
    p_values                 |
    condition_no             | Infinity
    num_rows_processed       | 1
    num_missing_rows_skipped | 0
    variance_covariance      | {{0,0,0,0},{0,0,0,0},{0,0,0,0},{0,0,0,0}}
    -[ RECORD 2 ]------------+----------------------------------------------------------------
    bedroom                  | 3
    coef                     | {-88155.8292501601,27.1966436294429,41404.0293363612,62.637521075324}
    r2                       | 0.841699901311252
    std_err                  | {57867.9999702625,17.8272309154689,43643.1321511114,70.8506824863954}
    t_stats                  | {-1.52339512849005,1.52556747362508,0.948695185143966,0.884077878676067}
    p_values                 | {0.188161432894871,0.187636685729869,0.386340032374927,0.417132778705789}
    condition_no             | 11722.6225642147
    num_rows_processed       | 9
    num_missing_rows_skipped | 0
    variance_covariance      | {{3348705420.5583,433697.545104226,-70253017.45773,-2593488.13800193}, ...
    -[ RECORD 3 ]------------+----------------------------------------------------------------
    bedroom                  | 2
    coef                     | {-84242.0345406597,55.4430144648696,-78966.9753675319,225.611910021192}
    r2                       | 0.968809546465313
    std_err                  | {35018.9991665742,19.5731125320686,23036.8071292552,49.0448678148784}
    t_stats                  | {-2.40560942761235,2.83261103077151,-3.42786111480046,4.60011251070697}
    p_values                 | {0.250804617665239,0.21605133377602,0.180704400437373,0.136272031474122}
    condition_no             | 10086.1048721726
    num_rows_processed       | 5
    num_missing_rows_skipped | 0
    variance_covariance      | {{1226330302.62852,-300921.595596804,551696673.397849,-1544160.63236119}, ...
    
  6. Compare predicted price with actual. (This example uses the original data table to perform the prediction. Typically a different test dataset with the same features as the original training dataset would be used for prediction.)
    \x OFF
    SELECT houses.*,
           madlib.linregr_predict( m.coef,
                                   ARRAY[1,tax,bath,size]
                                 ) as predict,
            price -
              madlib.linregr_predict( m.coef,
                                      ARRAY[1,tax,bath,size]
                                    ) as residual
    FROM houses, houses_linregr m ORDER BY id;
    
    Result:
     id | tax  | bedroom | bath | price  | size |  lot  |     predict      |     residual
    ----+------+---------+------+--------+------+-------+------------------+-------------------
      1 |  590 |       2 |    1 |  50000 |  770 | 22100 | 53317.4426965542 | -3317.44269655424
      2 | 1050 |       3 |    2 |  85000 | 1410 | 12000 | 109152.124955627 | -24152.1249556268
      3 |   20 |       3 |    1 |  22500 | 1060 |  3500 | 51459.3486308563 | -28959.3486308563
      4 |  870 |       2 |    2 |  90000 | 1300 | 17500 |  98382.215907206 | -8382.21590720605
      5 | 1320 |       3 |    2 | 133000 | 1500 | 30000 | 121518.221409606 |  11481.7785903937
      6 | 1350 |       2 |    1 |  90500 |  820 | 25700 | 77853.9455638561 |  12646.0544361439
      7 | 2790 |       3 |  2.5 | 260000 | 2130 | 25000 | 201007.926371721 |  58992.0736282788
      8 |  680 |       2 |    1 | 142500 | 1170 | 22000 | 76130.7259665617 |  66369.2740334383
      9 | 1840 |       3 |    2 | 160000 | 1500 | 19000 | 136578.145387498 |  23421.8546125019
     10 | 3680 |       4 |    2 | 240000 | 2790 | 20000 |  255033.90159623 | -15033.9015962295
     11 | 1660 |       3 |    1 |  87000 | 1030 | 17500 | 97440.5250982852 | -10440.5250982852
     12 | 1620 |       3 |    2 | 118600 | 1250 | 20000 | 117577.415360321 |  1022.58463967926
     13 | 3100 |       3 |    2 | 140000 | 1760 | 38000 | 186203.892319613 | -46203.8923196126
     14 | 2070 |       2 |    3 | 148000 | 1550 | 14000 | 155946.739425521 | -7946.73942552117
     15 |  650 |       3 |  1.5 |  65000 | 1450 | 12000 | 94497.4293105379 | -29497.4293105379
    (15 rows)
    
  7. Compare predicted price with actual price, with grouping, so that a different model is used depending on the number of bedrooms.
    \x OFF
    SELECT houses.*,
           madlib.linregr_predict( m.coef,
                                  ARRAY[1,tax,bath,size]
                                 ) as predict,
            price -
              madlib.linregr_predict( m.coef,
                                     ARRAY[1,tax,bath,size]
                                    ) as residual
    FROM houses, houses_linregr_bedroom m
    WHERE houses.bedroom = m.bedroom
    ORDER BY id;
    
    Result:
     id | tax  | bedroom | bath | price  | size |  lot  |     predict      |     residual
    ----+------+---------+------+--------+------+-------+------------------+-------------------
      1 |  590 |       2 |    1 |  50000 |  770 | 22100 | 43223.5393423978 |  6776.46065760222
      2 | 1050 |       3 |    2 |  85000 | 1410 | 12000 | 111527.609949684 |  -26527.609949684
      3 |   20 |       3 |    1 |  22500 | 1060 |  3500 | 20187.9052986341 |  2312.09470136587
      4 |  870 |       2 |    2 |  90000 | 1300 | 17500 | 99354.9203362612 | -9354.92033626116
      5 | 1320 |       3 |    2 | 133000 | 1500 | 30000 | 124508.080626412 |  8491.91937358756
      6 | 1350 |       2 |    1 |  90500 |  820 | 25700 | 96640.8258367579 |  -6140.8258367579
      7 | 2790 |       3 |  2.5 | 260000 | 2130 | 25000 | 224650.799707327 |  35349.2002926733
      8 |  680 |       2 |    1 | 142500 | 1170 | 22000 | 138458.174652714 |  4041.82534728572
      9 | 1840 |       3 |    2 | 160000 | 1500 | 19000 | 138650.335313722 |  21349.6646862777
     10 | 3680 |       4 |    2 | 240000 | 2790 | 20000 |           240000 |                 0
     11 | 1660 |       3 |    1 |  87000 | 1030 | 17500 | 62911.2752186594 |  24088.7247813406
     12 | 1620 |       3 |    2 | 118600 | 1250 | 20000 | 117007.693446414 |  1592.30655358579
     13 | 3100 |       3 |    2 | 140000 | 1760 | 38000 | 189203.861766403 | -49203.8617664034
     14 | 2070 |       2 |    3 | 148000 | 1550 | 14000 | 143322.539831869 |  4677.46016813093
     15 |  650 |       3 |  1.5 |  65000 | 1450 | 12000 | 82452.4386727394 | -17452.4386727394
    (15 rows)
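
  8. (Optional) Pre-build the independent-variable array in a single column, as mentioned in example 2. This is a sketch with hypothetical table names (houses_iv, houses_iv_linregr); the fitted coefficients are identical to those in example 2.
    DROP TABLE IF EXISTS houses_iv, houses_iv_linregr, houses_iv_linregr_summary;
    CREATE TABLE houses_iv AS
        SELECT id, price, ARRAY[1, tax, bath, size]::float8[] AS iv
        FROM houses;
    SELECT madlib.linregr_train( 'houses_iv',
                                 'houses_iv_linregr',
                                 'price',
                                 'iv'
                               );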
    

Note
All table names can be optionally schema qualified (current_schemas() is searched if a schema name is not provided), and all table and column names follow the database's case-sensitivity and quoting rules. For instance, 'mytable' and 'MyTable' both resolve to the same entity, i.e. 'mytable'. If mixed-case or multi-byte characters are desired for entity names, the string should be double-quoted; in this case the input would be '"MyTable"'.

Technical Background

Ordinary least-squares (OLS) linear regression refers to a stochastic model in which the conditional mean of the dependent variable (usually denoted \( Y \)) is an affine function of the vector of independent variables (usually denoted \( \boldsymbol x \)). That is,

\[ E[Y \mid \boldsymbol x] = \boldsymbol c^T \boldsymbol x \]

for some unknown vector of coefficients \( \boldsymbol c \). The assumption is that the residuals are independent and identically distributed (i.i.d.) Gaussian. That is, the (conditional) probability density of \( Y \) is given by

\[ f(y \mid \boldsymbol x) = \frac{1}{\sqrt{2 \pi \sigma^2}} \cdot \exp\left(-\frac{1}{2 \sigma^2} \cdot (y - \boldsymbol x^T \boldsymbol c)^2 \right) \,. \]

OLS linear regression finds the vector of coefficients \( \boldsymbol c \) that maximizes the likelihood of the observations.

Let \( \boldsymbol y \in \mathbf R^n \) denote the vector of observed values of the dependent variable, and let \( X \in \mathbf R^{n \times k} \) denote the design matrix whose \( n \) rows are the observed vectors of independent variables \( \boldsymbol x_i \), so that \( k \) is the number of independent variables (including any constant intercept term). \( X^T \) denotes the transpose of \( X \), and \( M^+ \) the Moore-Penrose pseudo-inverse of a matrix \( M \).

Maximizing the likelihood is equivalent to maximizing the log-likelihood \( \sum_{i=1}^n \log f(y_i \mid \boldsymbol x_i) \). Since \( \log f(y_i \mid \boldsymbol x_i) = -\tfrac{1}{2} \log(2 \pi \sigma^2) - \tfrac{1}{2\sigma^2} (y_i - \boldsymbol c^T \boldsymbol x_i)^2 \) and only the second term depends on \( \boldsymbol c \), this is in turn equivalent to minimizing the residual sum of squares \( RSS \) (also called the sum of squared residuals or the sum of squared errors of prediction),

\[ RSS = \sum_{i=1}^n ( y_i - \boldsymbol c^T \boldsymbol x_i )^2 = (\boldsymbol y - X \boldsymbol c)^T (\boldsymbol y - X \boldsymbol c) \,. \]

Setting the gradient \( \nabla_{\boldsymbol c}\, RSS = -2 X^T (\boldsymbol y - X \boldsymbol c) \) to zero yields the normal equations \( X^T X \boldsymbol c = X^T \boldsymbol y \), so the \( RSS \) is minimized at

\[ \boldsymbol c = (X^T X)^+ X^T \boldsymbol y \,. \]

Computing the total sum of squares \( TSS \), the explained sum of squares \( ESS \) (also called the regression sum of squares), and the coefficient of determination \( R^2 \) is done according to the following formulas:

\begin{align*}
ESS & = \boldsymbol y^T X \boldsymbol c - \frac{ \| \boldsymbol y \|_1^2 }{n} \\
TSS & = \sum_{i=1}^n y_i^2 - \frac{ \| \boldsymbol y \|_1^2 }{n} \\
R^2 & = \frac{ESS}{TSS}
\end{align*}

Note: The last equality follows from the definition \( R^2 = 1 - \frac{RSS}{TSS} \) and the fact that for linear regression \( TSS = RSS + ESS \). A proof of the latter can be found, e.g., at: http://en.wikipedia.org/wiki/Sum_of_squares
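For completeness, here is a short sketch of that decomposition. The normal equations imply \( RSS = \boldsymbol y^T \boldsymbol y - \boldsymbol y^T X \boldsymbol c \), hence

\[ RSS + ESS = \left( \boldsymbol y^T \boldsymbol y - \boldsymbol y^T X \boldsymbol c \right) + \left( \boldsymbol y^T X \boldsymbol c - \frac{\| \boldsymbol y \|_1^2}{n} \right) = \sum_{i=1}^n y_i^2 - \frac{\| \boldsymbol y \|_1^2}{n} = TSS \,. \]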

We estimate the variance \( Var[Y - \boldsymbol c^T \boldsymbol x \mid \boldsymbol x] \) as

\[ \sigma^2 = \frac{RSS}{n - k} \]

and compute the t-statistic for coefficient \( i \) as

\[ t_i = \frac{c_i}{\sqrt{\sigma^2 \cdot \left( (X^T X)^{-1} \right)_{ii} }} \,. \]

The \( p \)-value for coefficient \( i \) gives the probability of seeing a value at least as extreme as the one observed, provided that the null hypothesis ( \( c_i = 0 \)) is true. Letting \( F_\nu \) denote the cumulative distribution function of the Student's t-distribution with \( \nu \) degrees of freedom, the \( p \)-value for coefficient \( i \) is therefore

\[ p_i = \Pr(|T| \geq |t_i|) = 2 \cdot (1 - F_{n - k}( |t_i| )) \]

where \( T \) is a Student's t-distributed random variable with mean 0.
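As a concrete check against the example output above: the tax coefficient of the ungrouped houses model has \( t \approx 1.8216 \) with \( n - k = 15 - 4 = 11 \) degrees of freedom, so

\[ p = 2 \cdot \left( 1 - F_{11}(1.8216) \right) \approx 0.0958 \,, \]

which matches the second entry of p_values in example 4.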

The condition number [2] \( \kappa(X) = \|X\|_2\cdot\|X^{-1}\|_2\) is computed as the product of two spectral norms [3]. The spectral norm of a matrix \(X\) is the largest singular value of \(X\) i.e. the square root of the largest eigenvalue of the positive-semidefinite matrix \(X^{*}X\):

\[ \|X\|_2 = \sqrt{\lambda_{\max}\left(X^{*}X\right)}\ , \]

where \(X^{*}\) is the conjugate transpose of \(X\). The condition number of a linear regression problem is a worst-case measure of how sensitive the result is to small perturbations of the input. A large condition number (say, more than 1000) indicates the presence of significant multicollinearity.
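The model trained on the full houses dataset in the examples above illustrates this rule of thumb:

SELECT condition_no FROM houses_linregr;
-- returns 9002.50457085737, well above 1000, indicating noticeable
-- collinearity among the intercept, tax, bath, and size terms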

Literature

[1] Cosma Shalizi: Statistics 36-350: Data Mining, Lecture Notes, 21 October 2009, http://www.stat.cmu.edu/~cshalizi/350/lectures/17/lecture-17.pdf

[2] Wikipedia: Condition Number, http://en.wikipedia.org/wiki/Condition_number.

[3] Wikipedia: Spectral Norm, http://en.wikipedia.org/wiki/Spectral_norm#Spectral_norm

[4] Wikipedia: Breusch–Pagan test, http://en.wikipedia.org/wiki/Breusch%E2%80%93Pagan_test

[5] Wikipedia: Heteroscedasticity-consistent standard errors, http://en.wikipedia.org/wiki/Heteroscedasticity-consistent_standard_errors

Related Topics

Robust Variance

Clustered Variance

Cross Validation

File linear.sql_in, source file for the SQL functions