2.1.0
User Documentation for Apache MADlib

The Clustered Variance module adjusts standard errors to account for clustering in the data. For example, replicating a dataset 100 times should not increase the precision of parameter estimates, but performing this procedure under the IID assumption will do exactly that. As another example, in economics-of-education research it is reasonable to expect that the error terms for children in the same class are not independent. Clustering the standard errors can correct for this.
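The replication example can be checked numerically. The following sketch (plain numpy on simulated toy data; not MADlib code) fits OLS, then replicates the dataset 100 times with each original row as its own cluster: the IID standard errors shrink by roughly a factor of 10, while the cluster-robust ones are unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
n, R = 50, 100                       # n original rows, replicated R times

# simple design: intercept + one regressor
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(size=n)

def ols_se(X, y, cluster):
    """OLS with classical (IID) and cluster-robust (sandwich) standard errors."""
    n, p = X.shape
    bread = np.linalg.inv(X.T @ X)
    beta = bread @ X.T @ y
    e = y - X @ beta
    # classical IID variance: sigma^2 (X'X)^{-1}
    v_iid = e @ e / (n - p) * bread
    # clustered meat: sum over clusters of (X_g' e_g)(X_g' e_g)'
    meat = np.zeros((p, p))
    for g in np.unique(cluster):
        s = X[cluster == g].T @ e[cluster == g]
        meat += np.outer(s, s)
    v_cl = bread @ meat @ bread
    return np.sqrt(np.diag(v_iid)), np.sqrt(np.diag(v_cl))

# original data: each row is its own cluster
se_iid_0, se_cl_0 = ols_se(X, y, np.arange(n))

# replicate the dataset R times; cluster = original row id
Xr, yr = np.tile(X, (R, 1)), np.tile(y, R)
cl = np.tile(np.arange(n), R)
se_iid_r, se_cl_r = ols_se(Xr, yr, cl)

print(se_iid_r / se_iid_0)   # IID SEs shrink by roughly 1/sqrt(R) = 0.1
print(se_cl_r / se_cl_0)     # clustered SEs are unchanged (ratio = 1)
```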

The MADlib Clustered Variance module includes functions to compute clustered variance estimates for linear, logistic, and multinomial logistic regression.

Clustered Variance Linear Regression Training Function

The clustered variance linear regression training function has the following syntax.

clustered_variance_linregr ( source_table,
                             out_table,
                             dependent_varname,
                             independent_varname,
                             clustervar,
                             grouping_cols
                           )

Arguments

source_table

TEXT. The name of the table containing the input data.

out_table

VARCHAR. Name of the generated table containing the output model. The output table contains the following columns.

coef DOUBLE PRECISION[]. Vector of the coefficients of the regression.
std_err DOUBLE PRECISION[]. Vector of the standard error of the coefficients.
t_stats DOUBLE PRECISION[]. Vector of the t-stats of the coefficients.
p_values DOUBLE PRECISION[]. Vector of the p-values of the coefficients.

A summary table named <out_table>_summary is also created, which is the same as the summary table created by the linregr_train function. Please refer to the documentation for linear regression for details.

dependent_varname
TEXT. An expression to evaluate for the dependent variable.
independent_varname
TEXT. An expression to evaluate for the independent variables.
clustervar
TEXT. A comma-separated list of the columns to use as cluster variables.
grouping_cols (optional)
TEXT, default: NULL. Not currently implemented; any non-NULL value is ignored. An expression list used to group the input dataset into discrete groups, running one regression per group, similar to the SQL GROUP BY clause. When this value is NULL, no grouping is used and a single result model is generated.

Clustered Variance Logistic Regression Training Function

The clustered variance logistic regression training function has the following syntax.

clustered_variance_logregr( source_table,
                            out_table,
                            dependent_varname,
                            independent_varname,
                            clustervar,
                            grouping_cols,
                            max_iter,
                            optimizer,
                            tolerance,
                            verbose_mode
                          )

Arguments

source_table
TEXT. The name of the table containing the input data.
out_table

VARCHAR. Name of the generated table containing the output model. The output table has the following columns:

coef Vector of the coefficients of the regression.
std_err Vector of the standard error of the coefficients.
z_stats Vector of the z-stats of the coefficients.
p_values Vector of the p-values of the coefficients.

A summary table named <out_table>_summary is also created, which is the same as the summary table created by the logregr_train function. Please refer to the documentation for logistic regression for details.

dependent_varname
TEXT. An expression to evaluate for the dependent variable.
independent_varname
TEXT. An expression to evaluate for the independent variable.
clustervar
TEXT. A comma-separated list of columns to use as cluster variables.
grouping_cols (optional)
TEXT, default: NULL. Not yet implemented. Any non-NULL values are ignored. An expression list used to group the input dataset into discrete groups, running one regression per group. Similar to the SQL GROUP BY clause. When this value is NULL, no grouping is used and a single result model is generated.
max_iter (optional)
INTEGER, default: 20. The maximum number of iterations that are allowed.
optimizer (optional)
TEXT, default: 'irls'. The name of the optimizer to use:
  • 'newton' or 'irls': Iteratively reweighted least squares
  • 'cg': conjugate gradient
  • 'igd': incremental gradient descent.
tolerance (optional)
FLOAT8, default: 0.0001. The difference between log-likelihood values in successive iterations that indicates convergence. A zero value disables the convergence criterion, so that execution stops after max_iter iterations have completed.
verbose_mode (optional)
BOOLEAN, default: FALSE. Provides verbose output of the results of training.
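The tolerance criterion above amounts to a simple stopping rule. The following is a hypothetical Python illustration of that rule (the function name and structure are assumptions for exposition, not MADlib's internal code):

```python
def run_optimizer(step, max_iter=20, tolerance=1e-4):
    """Iterate until the log-likelihood change drops below `tolerance`.

    `step` performs one optimizer iteration and returns the new log-likelihood.
    A tolerance of zero disables the criterion, so all max_iter iterations run.
    """
    ll_old = float("-inf")
    for it in range(1, max_iter + 1):
        ll_new = step()
        if tolerance > 0 and abs(ll_new - ll_old) < tolerance:
            return it          # converged
        ll_old = ll_new
    return max_iter            # hit the iteration cap

# toy usage: log-likelihoods improve, then plateau within tolerance
lls = iter([-10.0, -5.0, -4.99995])
print(run_optimizer(lls.__next__))   # stops at iteration 3
```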

Clustered Variance Multinomial Logistic Regression Training Function
clustered_variance_mlogregr( source_table,
                             out_table,
                             dependent_varname,
                             independent_varname,
                             cluster_varname,
                             ref_category,
                             grouping_cols,
                             optimizer_params,
                             verbose_mode
                           )

Arguments

source_table
TEXT. The name of the table containing the input data.
out_table

TEXT. The name of the table where the regression model will be stored. The output table has the following columns:

category The category.
ref_category The reference category used for modeling.
coef Vector of the coefficients of the regression.
std_err Vector of the standard error of the coefficients.
z_stats Vector of the z-stats of the coefficients.
p_values Vector of the p-values of the coefficients.

A summary table named <out_table>_summary is also created, which is the same as the summary table created by the mlogregr_train function. Please refer to the documentation for multinomial logistic regression for details.

dependent_varname
TEXT. An expression to evaluate for the dependent variable.
independent_varname
TEXT. An expression to evaluate for the independent variable.
cluster_varname
TEXT. A comma-separated list of columns to use as cluster variables.
ref_category (optional)
INTEGER. Reference category in the range [0, num_category).
grouping_cols (optional)
TEXT, default: NULL. Not yet implemented. Any non-NULL values are ignored. A comma-separated list of columns to use as grouping variables.
optimizer_params (optional)
TEXT, default: NULL, which uses the default values of the optimizer parameters: max_iter=20, optimizer='newton', tolerance=1e-4. To override, supply a string of comma-separated 'key=value' pairs, for example 'max_iter=40, optimizer=newton, tolerance=0.001'.
verbose_mode (optional)
BOOLEAN, default FALSE. If TRUE, detailed information is printed when computing logistic regression.

Clustered Variance for Cox Proportional Hazards model

The clustered robust variance estimator function for the Cox Proportional Hazards model has the following syntax.

clustered_variance_coxph(model_table, output_table, clustervar)

Arguments

model_table
TEXT. The name of the model table, which is exactly the same as the 'output_table' parameter of the coxph_train() function.
output_table
TEXT. The name of the table where the output is saved. It has the following columns:
coef FLOAT8[]. Vector of the coefficients.
loglikelihood FLOAT8. Log-likelihood value of the MLE estimate.
std_err FLOAT8[]. Vector of the standard error of the coefficients.
clustervar TEXT. A comma-separated list of columns to use as cluster variables.
clustered_se FLOAT8[]. Vector of the robust standard errors of the coefficients.
clustered_z FLOAT8[]. Vector of the robust z-stats of the coefficients.
clustered_p FLOAT8[]. Vector of the robust p-values of the coefficients.
hessian FLOAT8[]. The Hessian matrix.
clustervar
TEXT. A comma-separated list of columns to use as cluster variables.

Examples
  1. Create a testing data table
    CREATE TABLE abalone (
        id integer,
        sex text,
        length double precision,
        diameter double precision,
        height double precision,
        whole double precision,
        shucked double precision,
        viscera double precision,
        shell double precision,
        rings integer
    );
    INSERT INTO abalone VALUES
    (3151, 'F', 0.655000000000000027, 0.505000000000000004, 0.165000000000000008, 1.36699999999999999, 0.583500000000000019, 0.351499999999999979, 0.396000000000000019, 10),
    (2026, 'F', 0.550000000000000044, 0.469999999999999973, 0.149999999999999994, 0.920499999999999985, 0.381000000000000005, 0.243499999999999994, 0.267500000000000016, 10),
    (3751, 'I', 0.434999999999999998, 0.375, 0.110000000000000001, 0.41549999999999998, 0.170000000000000012, 0.0759999999999999981, 0.14499999999999999, 8),
    (720, 'I', 0.149999999999999994, 0.100000000000000006, 0.0250000000000000014, 0.0149999999999999994, 0.00449999999999999966, 0.00400000000000000008, 0.0050000000000000001, 2),
    (1635, 'F', 0.574999999999999956, 0.469999999999999973, 0.154999999999999999, 1.1160000000000001, 0.509000000000000008, 0.237999999999999989, 0.340000000000000024, 10),
    (2648, 'I', 0.5, 0.390000000000000013, 0.125, 0.582999999999999963, 0.293999999999999984, 0.132000000000000006, 0.160500000000000004, 8),
    (1796, 'F', 0.57999999999999996, 0.429999999999999993, 0.170000000000000012, 1.47999999999999998, 0.65349999999999997, 0.32400000000000001, 0.41549999999999998, 10),
    (209, 'F', 0.525000000000000022, 0.41499999999999998, 0.170000000000000012, 0.832500000000000018, 0.275500000000000023, 0.168500000000000011, 0.309999999999999998, 13),
    (1451, 'I', 0.455000000000000016, 0.33500000000000002, 0.135000000000000009, 0.501000000000000001, 0.274000000000000021, 0.0995000000000000051, 0.106499999999999997, 7),
    (1108, 'I', 0.510000000000000009, 0.380000000000000004, 0.115000000000000005, 0.515499999999999958, 0.214999999999999997, 0.113500000000000004, 0.166000000000000009, 8),
    (3675, 'F', 0.594999999999999973, 0.450000000000000011, 0.165000000000000008, 1.08099999999999996, 0.489999999999999991, 0.252500000000000002, 0.279000000000000026, 12),
    (2108, 'F', 0.675000000000000044, 0.550000000000000044, 0.179999999999999993, 1.68849999999999989, 0.562000000000000055, 0.370499999999999996, 0.599999999999999978, 15),
    (3312, 'F', 0.479999999999999982, 0.380000000000000004, 0.135000000000000009, 0.507000000000000006, 0.191500000000000004, 0.13650000000000001, 0.154999999999999999, 12),
    (882, 'M', 0.655000000000000027, 0.520000000000000018, 0.165000000000000008, 1.40949999999999998, 0.585999999999999965, 0.290999999999999981, 0.405000000000000027, 9),
    (3402, 'M', 0.479999999999999982, 0.395000000000000018, 0.149999999999999994, 0.681499999999999995, 0.214499999999999996, 0.140500000000000014, 0.2495, 18),
    (829, 'I', 0.409999999999999976, 0.325000000000000011, 0.100000000000000006, 0.394000000000000017, 0.20799999999999999, 0.0655000000000000027, 0.105999999999999997, 6),
    (1305, 'M', 0.535000000000000031, 0.434999999999999998, 0.149999999999999994, 0.716999999999999971, 0.347499999999999976, 0.14449999999999999, 0.194000000000000006, 9),
    (3613, 'M', 0.599999999999999978, 0.46000000000000002, 0.179999999999999993, 1.1399999999999999, 0.422999999999999987, 0.257500000000000007, 0.364999999999999991, 10),
    (1068, 'I', 0.340000000000000024, 0.265000000000000013, 0.0800000000000000017, 0.201500000000000012, 0.0899999999999999967, 0.0475000000000000006, 0.0550000000000000003, 5),
    (2446, 'M', 0.5, 0.380000000000000004, 0.135000000000000009, 0.583500000000000019, 0.22950000000000001, 0.126500000000000001, 0.179999999999999993, 12),
    (1393, 'M', 0.635000000000000009, 0.474999999999999978, 0.170000000000000012, 1.19350000000000001, 0.520499999999999963, 0.269500000000000017, 0.366499999999999992, 10),
    (359, 'M', 0.744999999999999996, 0.584999999999999964, 0.214999999999999997, 2.49900000000000011, 0.92649999999999999, 0.471999999999999975, 0.699999999999999956, 17),
    (549, 'F', 0.564999999999999947, 0.450000000000000011, 0.160000000000000003, 0.79500000000000004, 0.360499999999999987, 0.155499999999999999, 0.23000000000000001, 12),
    (1154, 'F', 0.599999999999999978, 0.474999999999999978, 0.160000000000000003, 1.02649999999999997, 0.484999999999999987, 0.2495, 0.256500000000000006, 9),
    (1790, 'F', 0.54500000000000004, 0.385000000000000009, 0.149999999999999994, 1.11850000000000005, 0.542499999999999982, 0.244499999999999995, 0.284499999999999975, 9),
    (3703, 'F', 0.665000000000000036, 0.540000000000000036, 0.195000000000000007, 1.76400000000000001, 0.850500000000000034, 0.361499999999999988, 0.469999999999999973, 11),
    (1962, 'F', 0.655000000000000027, 0.515000000000000013, 0.179999999999999993, 1.41199999999999992, 0.619500000000000051, 0.248499999999999999, 0.496999999999999997, 11),
    (1665, 'I', 0.604999999999999982, 0.469999999999999973, 0.14499999999999999, 0.802499999999999991, 0.379000000000000004, 0.226500000000000007, 0.220000000000000001, 9),
    (635, 'M', 0.359999999999999987, 0.294999999999999984, 0.100000000000000006, 0.210499999999999993, 0.0660000000000000031, 0.0524999999999999981, 0.0749999999999999972, 9),
    (3901, 'M', 0.445000000000000007, 0.344999999999999973, 0.140000000000000013, 0.475999999999999979, 0.205499999999999988, 0.101500000000000007, 0.108499999999999999, 15),
    (2734, 'I', 0.41499999999999998, 0.33500000000000002, 0.100000000000000006, 0.357999999999999985, 0.169000000000000011, 0.067000000000000004, 0.104999999999999996, 7),
    (3856, 'M', 0.409999999999999976, 0.33500000000000002, 0.115000000000000005, 0.440500000000000003, 0.190000000000000002, 0.0850000000000000061, 0.135000000000000009, 8),
    (827, 'I', 0.395000000000000018, 0.28999999999999998, 0.0950000000000000011, 0.303999999999999992, 0.127000000000000002, 0.0840000000000000052, 0.076999999999999999, 6),
    (3381, 'I', 0.190000000000000002, 0.130000000000000004, 0.0449999999999999983, 0.0264999999999999993, 0.00899999999999999932, 0.0050000000000000001, 0.00899999999999999932, 5),
    (3972, 'I', 0.400000000000000022, 0.294999999999999984, 0.0950000000000000011, 0.252000000000000002, 0.110500000000000001, 0.0575000000000000025, 0.0660000000000000031, 6),
    (1155, 'M', 0.599999999999999978, 0.455000000000000016, 0.170000000000000012, 1.1915, 0.695999999999999952, 0.239499999999999991, 0.239999999999999991, 8),
    (3467, 'M', 0.640000000000000013, 0.5, 0.170000000000000012, 1.4544999999999999, 0.642000000000000015, 0.357499999999999984, 0.353999999999999981, 9),
    (2433, 'F', 0.609999999999999987, 0.484999999999999987, 0.165000000000000008, 1.08699999999999997, 0.425499999999999989, 0.232000000000000012, 0.380000000000000004, 11),
    (552, 'I', 0.614999999999999991, 0.489999999999999991, 0.154999999999999999, 0.988500000000000045, 0.41449999999999998, 0.195000000000000007, 0.344999999999999973, 13),
    (1425, 'F', 0.729999999999999982, 0.57999999999999996, 0.190000000000000002, 1.73750000000000004, 0.678499999999999992, 0.434499999999999997, 0.520000000000000018, 11),
    (2402, 'F', 0.584999999999999964, 0.41499999999999998, 0.154999999999999999, 0.69850000000000001, 0.299999999999999989, 0.145999999999999991, 0.195000000000000007, 12),
    (1748, 'F', 0.699999999999999956, 0.535000000000000031, 0.174999999999999989, 1.77299999999999991, 0.680499999999999994, 0.479999999999999982, 0.512000000000000011, 15),
    (3983, 'I', 0.57999999999999996, 0.434999999999999998, 0.149999999999999994, 0.891499999999999959, 0.362999999999999989, 0.192500000000000004, 0.251500000000000001, 6),
    (335, 'F', 0.739999999999999991, 0.599999999999999978, 0.195000000000000007, 1.97399999999999998, 0.597999999999999976, 0.408499999999999974, 0.709999999999999964, 16),
    (1587, 'I', 0.515000000000000013, 0.349999999999999978, 0.104999999999999996, 0.474499999999999977, 0.212999999999999995, 0.122999999999999998, 0.127500000000000002, 10),
    (2448, 'I', 0.275000000000000022, 0.204999999999999988, 0.0800000000000000017, 0.096000000000000002, 0.0359999999999999973, 0.0184999999999999991, 0.0299999999999999989, 6),
    (1362, 'F', 0.604999999999999982, 0.474999999999999978, 0.174999999999999989, 1.07600000000000007, 0.463000000000000023, 0.219500000000000001, 0.33500000000000002, 9),
    (2799, 'M', 0.640000000000000013, 0.484999999999999987, 0.149999999999999994, 1.09800000000000009, 0.519499999999999962, 0.222000000000000003, 0.317500000000000004, 10),
    (1413, 'F', 0.67000000000000004, 0.505000000000000004, 0.174999999999999989, 1.01449999999999996, 0.4375, 0.271000000000000019, 0.3745, 10),
    (1739, 'F', 0.67000000000000004, 0.540000000000000036, 0.195000000000000007, 1.61899999999999999, 0.739999999999999991, 0.330500000000000016, 0.465000000000000024, 11),
    (1152, 'M', 0.584999999999999964, 0.465000000000000024, 0.160000000000000003, 0.955500000000000016, 0.45950000000000002, 0.235999999999999988, 0.265000000000000013, 7),
    (2427, 'I', 0.564999999999999947, 0.434999999999999998, 0.154999999999999999, 0.782000000000000028, 0.271500000000000019, 0.16800000000000001, 0.284999999999999976, 14),
    (1777, 'M', 0.484999999999999987, 0.369999999999999996, 0.154999999999999999, 0.967999999999999972, 0.418999999999999984, 0.245499999999999996, 0.236499999999999988, 9),
    (3294, 'M', 0.574999999999999956, 0.455000000000000016, 0.184999999999999998, 1.15599999999999992, 0.552499999999999991, 0.242999999999999994, 0.294999999999999984, 13),
    (1403, 'M', 0.650000000000000022, 0.510000000000000009, 0.190000000000000002, 1.54200000000000004, 0.715500000000000025, 0.373499999999999999, 0.375, 9),
    (2256, 'M', 0.510000000000000009, 0.395000000000000018, 0.14499999999999999, 0.61850000000000005, 0.215999999999999998, 0.138500000000000012, 0.239999999999999991, 12),
    (3984, 'F', 0.584999999999999964, 0.450000000000000011, 0.125, 0.873999999999999999, 0.354499999999999982, 0.20749999999999999, 0.225000000000000006, 6),
    (1116, 'M', 0.525000000000000022, 0.405000000000000027, 0.119999999999999996, 0.755499999999999949, 0.3755, 0.155499999999999999, 0.201000000000000012, 9),
    (1366, 'M', 0.609999999999999987, 0.474999999999999978, 0.170000000000000012, 1.02649999999999997, 0.434999999999999998, 0.233500000000000013, 0.303499999999999992, 10),
    (3759, 'I', 0.525000000000000022, 0.400000000000000022, 0.140000000000000013, 0.605500000000000038, 0.260500000000000009, 0.107999999999999999, 0.209999999999999992, 9);
    
  2. Run the linear regression function and view the results.
    \x on
    DROP TABLE IF EXISTS out_table, out_table_summary;
    SELECT madlib.clustered_variance_linregr( 'abalone',
                                              'out_table',
                                              'rings',
                                              'ARRAY[1, diameter, length, height]',
                                              'sex',
                                              NULL
                                            );
    SELECT * FROM out_table;
    
    Result:
    -[ RECORD 1 ]-----------------------------------------------------------------------
    coef     | {2.53526184512177,14.1959262629025,-17.4142205261305,73.9536825412142}
    std_err  | {2.08204036310278,10.1218601277935,16.350795118006,17.7971852600971}
    t_stats  | {1.21768141004893,1.40250172237829,-1.06503814649071,4.15535835922465}
    p_values | {0.22845116414893,0.166285056923658,0.2914293364465,0.000112184340238519}
    
  3. Run the logistic regression function and view the results.
    DROP TABLE IF EXISTS out_table, out_table_summary;
    SELECT madlib.clustered_variance_logregr( 'abalone',
                                              'out_table',
                                              'rings < 10',
                                              'ARRAY[1, diameter, length, height]',
                                              'sex'
                                            );
    SELECT * FROM out_table;
    
    Result:
    -[ RECORD 1 ]----------------------------------------------------------------------------
    coef     | {7.03525620439852,5.16355730320515,-4.03125518391448,-47.5439002903374}
    std_err  | {2.69860857119167,21.4303882155136,16.6528594816461,5.89094595954187}
    z_stats  | {2.60699394476904,0.240945579299736,-0.242075854201348,-8.0706733038907}
    p_values | {0.00913409755638422,0.809597295390548,0.808721387408619,6.99115526001629e-16}
    
  4. Run the multinomial logistic regression and view the results.
    DROP TABLE IF EXISTS out_table, out_table_summary;
    SELECT madlib.clustered_variance_mlogregr( 'abalone',
                                               'out_table',
                                               'CASE WHEN rings < 10 THEN 1 ELSE 0 END',
                                               'ARRAY[1, diameter, length, height]',
                                               'sex',
                                               0
                                             );
    SELECT * FROM out_table;
    
    Result:
    -[ RECORD 1 ]+-------------------------------------------------------------------------------
    category     | 1
    ref_category | 0
    coef         | {7.03525620439846,5.16355730320138,-4.03125518391122,-47.5439002903385}
    std_err      | {2.69860857119169,21.4303882155156,16.6528594816446,5.89094595954797}
    z_stats      | {2.606993944769,0.240945579299537,-0.242075854201173,-8.07067330388254}
    p_values     | {0.00913409755638535,0.809597295390702,0.808721387408754,6.99115526048361e-16}
    
  5. Run the Cox Proportional Hazards regression and compute the clustered robust estimator.
    DROP TABLE IF EXISTS sample_data;
    CREATE TABLE sample_data (
        id INTEGER NOT NULL,
        grp DOUBLE PRECISION,
        wbc DOUBLE PRECISION,
        timedeath INTEGER,
        status BOOLEAN,
        sex TEXT
    );
    COPY sample_data FROM STDIN WITH DELIMITER '|';
      0 |   0 | 1.45 |        35 | t |   'M'
      1 |   0 | 1.47 |        34 | t |   'M'
      3 |   0 |  2.2 |        32 | t |   'M'
      4 |   0 | 1.78 |        25 | t |   'M'
      5 |   0 | 2.57 |        23 | t |   'M'
      6 |   0 | 2.32 |        22 | t |   'M'
      7 |   0 | 2.01 |        20 | t |   'M'
      8 |   0 | 2.05 |        19 | t |   'M'
      9 |   0 | 2.16 |        17 | t |   'M'
     10 |   0 |  3.6 |        16 | t |   'M'
     11 |   1 |  2.3 |        15 | t |   'M'
     12 |   0 | 2.88 |        13 | t |   'I'
     13 |   1 |  1.5 |        12 | t |   'I'
     14 |   0 |  2.6 |        11 | t |   'I'
     15 |   0 |  2.7 |        10 | t |   'I'
     16 |   0 |  2.8 |         9 | t |   'I'
     17 |   1 | 2.32 |         8 | t |   'F'
     18 |   0 | 4.43 |         7 | t |   'F'
     19 |   0 | 2.31 |         6 | t |   'F'
     20 |   1 | 3.49 |         5 | t |   'F'
     21 |   1 | 2.42 |         4 | t |   'F'
     22 |   1 | 4.01 |         3 | t |   'F'
     23 |   1 | 4.91 |         2 | t |   'F'
     24 |   1 |    5 |         1 | t |   'F'
    \.
    DROP TABLE IF EXISTS sample_cox, sample_cox_summary, sample_cox_cl;
    SELECT madlib.coxph_train( 'sample_data',
                               'sample_cox',
                               'timedeath',
                               'ARRAY[grp,wbc]',
                               'status'
                             );
    SELECT madlib.clustered_variance_coxph('sample_cox',
                                           'sample_cox_cl',
                                           'sex');
    SELECT * FROM sample_cox_cl;
    
    Result:
    -[ RECORD 1 ]-+----------------------------------------------------------------------------
    coef          | {2.54407073265254,1.67172094779487}
    loglikelihood | -37.8532498733
    std_err       | {0.677180599295183,0.387195514577697}
    clustervar    | sex
    clustered_se  | {0.545274710867954,0.228046806400425}
    clustered_z   | {4.6656679320465,7.33060451133666}
    clustered_p   | {3.07616143241047e-06,2.29116873819977e-13}
    hessian       | {{2.78043065745617,-2.25848560642761},{-2.25848560642761,8.50472838284472}}
    

Technical Background

Assume that the data can be separated into \(m\) clusters. Usually this can be done by grouping the data table according to one or multiple columns.

The estimator has a similar form to the usual sandwich estimator

\[ S(\vec{c}) = B(\vec{c}) M(\vec{c}) B(\vec{c}) \]

The bread part is the same as in the Huber-White sandwich estimator

\begin{eqnarray} B(\vec{c}) & = & \left(-\sum_{i=1}^{n} H(y_i, \vec{x}_i, \vec{c})\right)^{-1}\\ & = & \left(-\sum_{i=1}^{n}\frac{\partial^2 l(y_i, \vec{x}_i, \vec{c})}{\partial c_\alpha \partial c_\beta}\right)^{-1} \end{eqnarray}

where \(H\) is the Hessian matrix, i.e., the second derivative of the target function

\[ L(\vec{c}) = \sum_{i=1}^n l(y_i, \vec{x}_i, \vec{c})\ . \]

The meat part is different

\[ M(\vec{c}) = \bf{A}^T\bf{A} \]

where the \(m\)-th row of \(\bf{A}\) is

\[ A_m = \sum_{i\in G_m}\frac{\partial l(y_i,\vec{x}_i,\vec{c})}{\partial \vec{c}} \]

where \(G_m\) is the set of rows that belong to the same cluster.

The quantities \(B\) and \(A\) can be computed for each cluster during a single scan through the data table in an aggregate function. The per-cluster results are then summed outside the aggregate function to form the full \(B\) and \(A\). Finally, the matrix multiplications are done in a separate function on the master node.
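As an illustration of this scheme (a minimal numpy sketch with simulated data, not MADlib's actual aggregate implementation), the following computes the clustered sandwich estimator for a logistic regression: one pass accumulates the Hessian and the per-cluster score sums \(A_m\), then the product \(S = B M B\) is formed once.

```python
import numpy as np

# simulated data with an assumed cluster structure
rng = np.random.default_rng(1)
n, p = 400, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([0.5, 1.0, -1.0])
y = (rng.random(n) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)
cluster = rng.integers(0, 20, size=n)          # 20 clusters

# fit logistic regression by Newton iterations (IRLS)
beta = np.zeros(p)
for _ in range(25):
    mu = 1 / (1 + np.exp(-X @ beta))
    W = mu * (1 - mu)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - mu))

# bread: B = (-sum_i H_i)^{-1}; for logistic, H_i = -mu_i(1-mu_i) x_i x_i'
mu = 1 / (1 + np.exp(-X @ beta))
bread = np.linalg.inv(X.T @ ((mu * (1 - mu))[:, None] * X))

# meat: M = A^T A, where row A_m sums the scores x_i (y_i - mu_i) in cluster m
A = np.zeros((cluster.max() + 1, p))
np.add.at(A, cluster, X * (y - mu)[:, None])
meat = A.T @ A

S = bread @ meat @ bread                      # clustered covariance S = B M B
clustered_se = np.sqrt(np.diag(S))
print(clustered_se)
```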

When multinomial logistic regression is computed before the multinomial clustered variance calculation, it uses a default reference category of zero, and the regression coefficients are included in the output table. The regression coefficients in the output appear in the same order as in the multinomial logistic regression function, which is described below. For a problem with \( K \) dependent variables \( (1, \ldots, K) \) and \( J \) categories \( (0, \ldots, J-1) \), let \( m_{k,j} \) denote the coefficient for dependent variable \( k \) and category \( j \). The output is \( m_{k_1, j_0}, m_{k_1, j_1}, \ldots, m_{k_1, j_{J-1}}, m_{k_2, j_0}, m_{k_2, j_1}, \ldots, m_{k_K, j_{J-1}} \). This order is NOT CONSISTENT with the multinomial regression marginal effect calculation of the function marginal_mlogregr. This is deliberate, because the interfaces of all multinomial regressions (robust, clustered, ...) will be moved to match that used in marginal.
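The coefficient ordering just described (all categories for one variable, then the next variable) can be shown with a small hypothetical snippet; the names are illustrative only:

```python
# For K dependent variables and J categories, coefficients are laid out
# variable-major: all J categories of variable k before variable k+1.
K, J = 2, 3
order = [f"m[k{k},j{j}]" for k in range(1, K + 1) for j in range(J)]
print(order)
# → ['m[k1,j0]', 'm[k1,j1]', 'm[k1,j2]', 'm[k2,j0]', 'm[k2,j1]', 'm[k2,j2]']
```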

Literature

[1] Standard, Robust, and Clustered Standard Errors Computed in R, http://diffuseprior.wordpress.com/2012/06/15/standard-robust-and-clustered-standard-errors-computed-in-r/

Related Topics
File clustered_variance.sql_in documenting the clustered variance SQL functions.

File clustered_variance_coxph.sql_in documenting the clustered variance for Cox proportional hazards SQL functions.