Multinomial Regression

In statistics, multinomial regression is a classification method that generalizes binomial regression to multiclass problems, i.e. with more than two possible discrete outcomes. That is, it is a model that is used to predict the probabilities of the different possible outcomes of a categorically distributed dependent variable, given a set of independent variables (which may be real-valued, binary-valued, categorical-valued, etc.).
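Concretely, with the default 'logit' link the model assigns each non-reference category \( j \) its own coefficient vector \( \boldsymbol c_j \) and sets \( P(Y = j \mid \boldsymbol x) \propto \exp(\boldsymbol c_j^T \boldsymbol x) \), with the reference category's coefficients fixed at zero. A minimal Python sketch of this probability calculation (illustrative only, not MADlib code; the category names and coefficient values are made up):

```python
import math

def multinom_probs(coef, x):
    """Per-category probabilities under a multinomial logit model.

    `coef` maps each non-reference category to its coefficient vector;
    the reference category's linear predictor is fixed at 0.
    """
    scores = {cat: sum(c * v for c, v in zip(cs, x)) for cat, cs in coef.items()}
    scores["0"] = 0.0  # reference category: coefficients are all zero
    total = sum(math.exp(s) for s in scores.values())
    return {cat: math.exp(s) / total for cat, s in scores.items()}
```

For example, `multinom_probs({"1": [0.5, -0.2], "2": [1.0, 0.1]}, [1.0, 3.0])` returns probabilities for categories '0', '1', and '2' that sum to one.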

- Training Function
- The multinomial regression training function has the following syntax:
```sql
multinom( source_table,
          model_table,
          dependent_varname,
          independent_varname,
          ref_category,
          link_func,
          grouping_col,
          optim_params,
          verbose
        )
```

**Arguments**

- source_table
VARCHAR. Name of the table containing the training data.

- model_table
VARCHAR. Name of the generated table containing the model.

The model table produced by multinom() contains the following columns:

- <...>
  Grouping columns, if provided in input. This could be multiple columns depending on the `grouping_col` input.

- category
  VARCHAR. String representation of the category value.

- coef
  FLOAT8[]. Vector of the coefficients in the linear predictor.

- log_likelihood
  FLOAT8. The log-likelihood \( l(\boldsymbol \beta) \). The value will be the same across categories within the same group.

- std_err
  FLOAT8[]. Vector of the standard errors of the coefficients.

- z_stats
  FLOAT8[]. Vector of the z-statistics of the coefficients.

- p_values
  FLOAT8[]. Vector of the p-values of the coefficients.

- num_rows_processed
  BIGINT. Number of rows processed.

- num_rows_skipped
  BIGINT. Number of rows skipped due to missing values or failures.

- num_iterations
  INTEGER. Number of iterations actually completed. This can be smaller than the `max_iter` optimizer parameter if the `tolerance` parameter is provided and the algorithm converges before all iterations are completed.

A summary table named <model_table>_summary is also created at the same time, which has the following columns:

- method
  VARCHAR. String describing the model: 'multinom'.

- source_table
  VARCHAR. Data source table name.

- model_table
  VARCHAR. Model table name.

- dependent_varname
  VARCHAR. Expression for the dependent variable.

- independent_varname
  VARCHAR. Expression for the independent variables.

- ref_category
  VARCHAR. String representation of the reference category.

- link_func
  VARCHAR. String containing the link function parameter; only 'logit' is implemented at present.

- grouping_col
  VARCHAR. String representation of the grouping columns.

- optimizer_params
  VARCHAR. String containing the optimizer parameters, of the form 'optimizer=..., max_iter=..., tolerance=...'.

- num_all_groups
  INTEGER. Number of groups in the glm training.

- num_failed_groups
  INTEGER. Number of failed groups in the glm training.

- total_rows_processed
  BIGINT. Total number of rows processed in all groups.

- total_rows_skipped
  BIGINT. Total number of rows skipped in all groups due to missing values or failures.

- dependent_varname
VARCHAR. Name of the dependent variable column.

- independent_varname
  VARCHAR. Expression list to evaluate for the independent variables. An intercept variable is not assumed. It is common to provide an explicit intercept term by including a single constant `1` term in the independent variable list.

- link_func (optional)
  VARCHAR, default: 'logit'. Parameter for the link function. Currently, only 'logit' is supported.

- ref_category (optional)
  VARCHAR, default: '0'. Parameter to specify the reference category.

- grouping_col (optional)
VARCHAR, default: NULL. An expression list used to group the input dataset into discrete groups, running one regression per group. Similar to the SQL "GROUP BY" clause. When this value is NULL, no grouping is used and a single model is generated.

- optim_params (optional)
VARCHAR, default: 'max_iter=100,optimizer=irls,tolerance=1e-6'. Parameters for optimizer. Currently, we support tolerance=[tolerance for relative error between log-likelihoods], max_iter=[maximum iterations to run], optimizer=irls.

- verbose (optional)
  BOOLEAN, default: FALSE. Provides verbose output of the results of training.

- Note
- For p-values, we return the computed result directly. Other statistical packages, like R, produce the same result, but when printing the result to the screen a formatting function is applied, and any p-value smaller than the machine epsilon (the smallest positive floating-point number 'x' such that '1 + x != 1') is printed as "< xxx" (where xxx is the value of the machine epsilon). Although the results may look different, they are in fact the same.
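The note above can be made concrete with a short Python sketch (illustrative only, independent of MADlib; `format_p_value` is a hypothetical display helper, not a MADlib function):

```python
import sys

# Machine epsilon: the smallest positive x such that 1 + x != 1 in
# double-precision floating point.
eps = sys.float_info.epsilon

# A package that formats for display would print a tiny p-value as "< eps",
# while the stored number (what multinom() returns) is unchanged.
def format_p_value(p):
    return "< %g" % eps if p < eps else "%g" % p
```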

- Prediction Function
- The multinomial regression prediction function has the following syntax:

```sql
multinom_predict( model_table,
                  predict_table_input,
                  output_table,
                  predict_type,
                  verbose,
                  id_column
                )
```

**Arguments**

- model_table
  TEXT. Name of the generated table containing the model, which is the output table from multinom().

- predict_table_input
  TEXT. The name of the table containing the data to predict on. The table must contain an id column as the primary key.

- output_table
TEXT. Name of the generated table containing the predicted values.

The output table produced by multinom_predict contains the following columns:

- id
  SERIAL. Column to identify the predicted value.

- category
  TEXT. Available if predict_type = 'response'. Contains the predicted category.

- category_value
  FLOAT8. The predicted probability for the specific category value.

- predict_type
  TEXT. Either 'response' or 'probability'. Using 'response' gives the predicted category with the largest probability. Using 'probability' gives the predicted probabilities for all categories.

- verbose
  BOOLEAN, default: FALSE. Controls whether verbose output is displayed.

- id_column
  TEXT. The name of the id column in the input table.
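The relationship between the two predict_type settings can be sketched in Python (illustrative only; the probabilities below are made up): 'response' simply selects the category with the largest predicted probability.

```python
# Hypothetical per-category probabilities for one input row, i.e. what
# predict_type = 'probability' would produce.
probs = {"0": 0.15, "1": 0.25, "2": 0.60}

# predict_type = 'response' returns only the most probable category.
response = max(probs, key=probs.get)  # "2"
```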

- Examples

- Create the training data table.

```sql
DROP TABLE IF EXISTS test3;
CREATE TABLE test3 (
    feat1 INTEGER,
    feat2 INTEGER,
    cat INTEGER
);
INSERT INTO test3(feat1, feat2, cat) VALUES
(1,35,1), (2,33,0), (3,39,1), (1,37,1), (2,31,1), (3,36,0), (2,36,1),
(2,31,1), (2,41,1), (2,37,1), (1,44,1), (3,33,2), (1,31,1), (2,44,1),
(1,35,1), (1,44,0), (1,46,0), (2,46,1), (2,46,2), (3,49,1), (2,39,0),
(2,44,1), (1,47,1), (1,44,1), (1,37,2), (3,38,2), (1,49,0), (2,44,0),
(3,61,2), (1,65,2), (3,67,1), (3,65,2), (1,65,2), (2,67,2), (1,65,2),
(1,62,2), (3,52,2), (3,63,2), (2,59,2), (3,65,2), (2,59,0), (3,67,2),
(3,67,2), (3,60,2), (3,67,2), (3,62,2), (2,54,2), (3,65,2), (3,62,2),
(2,59,2), (3,60,2), (3,63,2), (3,65,2), (2,63,1), (2,67,2), (2,65,2),
(2,62,2);
```

- Run the multinomial regression function.

```sql
DROP TABLE IF EXISTS test3_output;
DROP TABLE IF EXISTS test3_output_summary;
SELECT madlib.multinom('test3',
                       'test3_output',
                       'cat',
                       'ARRAY[1, feat1, feat2]',
                       '0',
                       'logit'
                      );
```

- View the regression results.

```sql
-- Set extended display on for easier reading of output
\x on
SELECT * FROM test3_output;
```

Result:

```
-[ RECORD 1 ]------+------------------------------------------------------------
category           | 1
coef               | {1.45474045165731,0.084995618282504,-0.0172383499512136}
log_likelihood     | -39.1475993094045
std_err            | {2.13085878785549,0.585023211942952,0.0431489262260687}
z_stats            | {0.682701481650677,0.145285890452484,-0.399508202380224}
p_values           | {0.494795493298706,0.884485154314181,0.689518781152604}
num_rows_processed | 57
num_rows_skipped   | 0
iteration          | 6
-[ RECORD 2 ]------+------------------------------------------------------------
category           | 2
coef               | {-7.1290816775109,0.876487877074751,0.127886153038661}
log_likelihood     | -39.1475993094045
std_err            | {2.52105418324135,0.639578886139654,0.0445760103748678}
z_stats            | {-2.82781771407425,1.37041402721253,2.86894569440347}
p_values           | {0.00468664844488755,0.170557695812408,0.00411842502754068}
num_rows_processed | 57
num_rows_skipped   | 0
iteration          | 6
```

- Predict the dependent variable using the multinomial model. (This example uses the original data table to perform the prediction. Typically a different test dataset with the same features as the original training dataset would be used for prediction.)

```sql
\x off
-- Add the id column for the prediction function
ALTER TABLE test3 ADD COLUMN id SERIAL;
-- Predict probabilities for all categories using the original data
SELECT madlib.multinom_predict('test3_output', 'test3', 'test3_prd_prob', 'probability');
-- Display the predicted values
SELECT * FROM test3_prd_prob;
```

- Technical Background
- When link = 'logit', multinomial logistic regression models the outcomes of categorical dependent random variables (denoted \( Y \in \{ 0,1,2 \ldots k \} \)). The model assumes that the conditional mean of the dependent categorical variables is the logistic function of an affine combination of independent variables (usually denoted \( \boldsymbol x \)). That is,
\[ E[Y \mid \boldsymbol x] = \sigma(\boldsymbol c^T \boldsymbol x) \]

for some unknown vector of coefficients \( \boldsymbol c \) and where \( \sigma(x) = \frac{1}{1 + \exp(-x)} \) is the logistic function. Multinomial logistic regression finds the vector of coefficients \( \boldsymbol c \) that maximizes the likelihood of the observations.

Let

- \( \boldsymbol y \in \{ 0,1 \}^n \) denote the vector of observed dependent variables, with \( n \) rows, containing the observed values of the dependent variable,
- \( X \in \mathbf R^{n \times k} \) denote the design matrix with \( k \) columns and \( n \) rows, containing all observed vectors of independent variables \( \boldsymbol x_i \) as rows.

By definition,

\[ P[Y = y_i | \boldsymbol x_i] = \sigma((-1)^{y_i} \cdot \boldsymbol c^T \boldsymbol x_i) \,. \]

Maximizing the likelihood \( \prod_{i=1}^n \Pr(Y = y_i \mid \boldsymbol x_i) \) is equivalent to maximizing the log-likelihood \( \sum_{i=1}^n \log \Pr(Y = y_i \mid \boldsymbol x_i) \), which simplifies to

\[ l(\boldsymbol c) = -\sum_{i=1}^n \log(1 + \exp((-1)^{1 + y_i} \cdot \boldsymbol c^T \boldsymbol x_i)) \,. \]
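This simplification can be checked numerically. The sketch below (illustrative Python with made-up data, not MADlib code) evaluates the sum of log-probabilities and the closed form and confirms they agree in the binary case; note the sign convention: with \( P[Y = y_i \mid \boldsymbol x_i] = \sigma((-1)^{y_i} \boldsymbol c^T \boldsymbol x_i) \), the closed form carries exponent \( (-1)^{1+y_i} \).

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def log_likelihood_direct(c, X, y):
    """Sum of log P[Y = y_i | x_i] with P = sigma((-1)**y_i * c.x_i)."""
    total = 0.0
    for xi, yi in zip(X, y):
        t = sum(cj * xij for cj, xij in zip(c, xi))
        total += math.log(sigmoid((-1) ** yi * t))
    return total

def log_likelihood_closed(c, X, y):
    """Closed form: -sum log(1 + exp((-1)**(1 + y_i) * c.x_i))."""
    total = 0.0
    for xi, yi in zip(X, y):
        t = sum(cj * xij for cj, xij in zip(c, xi))
        total -= math.log(1.0 + math.exp((-1) ** (1 + yi) * t))
    return total
```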

The Hessian of this objective is \( H = -X^T A X \) where \( A = \text{diag}(a_1, \dots, a_n) \) is the diagonal matrix with \( a_i = \sigma(\boldsymbol c^T \boldsymbol x_i) \cdot \sigma(-\boldsymbol c^T \boldsymbol x_i) \,. \) Since \( H \) is negative semidefinite, \( l(\boldsymbol c) \) is concave, so maximizing it is a convex optimization problem. There are many techniques for solving convex optimization problems. Currently, logistic regression in MADlib can use:

- Iteratively Reweighted Least Squares
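As a concrete illustration, here is a pure-Python IRLS (Newton) sketch for the binary case with two coefficients, using the probability convention above. This is illustrative only, with made-up toy data; it is not MADlib's distributed implementation.

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def irls(X, y, max_iter=100, tolerance=1e-6):
    """Fit a two-coefficient binary logistic model by IRLS (Newton's method).

    Follows the convention above, P[Y = y_i | x_i] = sigma((-1)**y_i * c.x_i),
    so the Hessian is H = -X^T A X with a_i = sigma(t_i) * sigma(-t_i).
    """
    c = [0.0, 0.0]
    ll_old = float("-inf")
    for _ in range(max_iter):
        g = [0.0, 0.0]                    # gradient of l(c)
        XtAX = [[0.0, 0.0], [0.0, 0.0]]   # X^T A X = -H
        ll = 0.0
        for xi, yi in zip(X, y):
            t = c[0] * xi[0] + c[1] * xi[1]
            s = t if yi == 0 else -t      # s_i = (-1)**y_i * t_i
            ll += math.log(sigmoid(s))
            a = sigmoid(t) * sigmoid(-t)  # a_i, the diagonal of A
            sign = 1.0 if yi == 0 else -1.0
            for j in range(2):
                g[j] += sign * sigmoid(-s) * xi[j]
                for k in range(2):
                    XtAX[j][k] += a * xi[j] * xi[k]
        # Newton step: c <- c + (X^T A X)^{-1} g, via Cramer's rule (2x2)
        det = XtAX[0][0] * XtAX[1][1] - XtAX[0][1] * XtAX[1][0]
        c[0] += (XtAX[1][1] * g[0] - XtAX[0][1] * g[1]) / det
        c[1] += (XtAX[0][0] * g[1] - XtAX[1][0] * g[0]) / det
        # Stop on small relative change in log-likelihood (cf. 'tolerance')
        if abs(ll - ll_old) < tolerance * (abs(ll) + tolerance):
            break
        ll_old = ll
    return c
```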

We estimate the standard error for coefficient \( i \) as

\[ \mathit{se}(c_i) = \sqrt{\left( (X^T A X)^{-1} \right)_{ii}} \,. \]

The Wald z-statistic is

\[ z_i = \frac{c_i}{\mathit{se}(c_i)} \,. \]

The Wald \( p \)-value for coefficient \( i \) gives the probability (under the assumptions inherent in the Wald test) of seeing a value at least as extreme as the one observed, provided that the null hypothesis ( \( c_i = 0 \)) is true. Letting \( F \) denote the cumulative distribution function of a standard normal distribution, the Wald \( p \)-value for coefficient \( i \) is therefore

\[ p_i = \Pr(|Z| \geq |z_i|) = 2 \cdot (1 - F( |z_i| )) \]

where \( Z \) is a standard normally distributed random variable.

The odds ratio for coefficient \( i \) is estimated as \( \exp(c_i) \).
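These formulas can be checked against the example output above. The short Python sketch below (standard library only, not MADlib code) recomputes the z-statistic, p-value, and odds ratio for the first coefficient of category 1:

```python
import math

def wald_stats(coef, se):
    """Wald z-statistic and two-sided p-value for a single coefficient."""
    z = coef / se
    # F(x) = (1 + erf(x / sqrt(2))) / 2 is the standard normal CDF.
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# coef and std_err of the first coefficient for category 1 in the example output.
z, p = wald_stats(1.45474045165731, 2.13085878785549)
odds_ratio = math.exp(1.45474045165731)
```

The computed z and p agree with the z_stats and p_values columns shown in the example result.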

The condition number is computed as \( \kappa(X^T A X) \) during the iteration immediately *preceding* convergence (i.e., \( A \) is computed using the coefficients of the previous iteration). A large condition number (say, more than 1000) indicates the presence of significant multicollinearity.

The multinomial logistic regression uses a default reference category of zero, and the regression coefficients in the output are in the order described below. For a problem with \( K \) independent variables \( (1, \ldots, K) \) and \( J \) categories \( (0, \ldots, J-1) \), let \( m_{k,j} \) denote the coefficient for independent variable \( k \) and category \( j \). The output is \( m_{1,0}, m_{1,1}, \ldots, m_{1,J-1}, m_{2,0}, m_{2,1}, \ldots, m_{2,J-1}, \ldots, m_{K,J-1} \). The order is NOT consistent with the multinomial regression marginal effect calculation of the function *marginal_mlogregr*. This is deliberate because the interfaces of all multinomial regressions (robust, clustered, ...) will be moved to match that used in marginal.
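The ordering can be illustrated with a small Python sketch (hypothetical K = 2 independent variables and J = 3 categories; the coefficient names are placeholders):

```python
K, J = 2, 3  # hypothetical numbers of independent variables and categories

# m[(k, j)] stands for the coefficient of independent variable k, category j.
m = {(k, j): "m_%d_%d" % (k, j) for k in range(1, K + 1) for j in range(J)}

# Output order: all categories of variable 1 first, then all of variable 2.
order = [m[(k, j)] for k in range(1, K + 1) for j in range(J)]
```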

- Literature

A collection of nice write-ups, with valuable pointers into further literature:

[1] Annette J. Dobson: An Introduction to Generalized Linear Models, Second Edition. Nov 2001

[2] Cosma Shalizi: Statistics 36-350: Data Mining, Lecture Notes, 18 November 2009, http://www.stat.cmu.edu/~cshalizi/350/lectures/26/lecture-26.pdf

[3] Scott A. Czepiel: Maximum Likelihood Estimation of Logistic Regression Models: Theory and Implementation, Retrieved Jul 12 2012, http://czep.net/stat/mlelr.pdf

- Related Topics

File multiresponseglm.sql_in documenting the multinomial regression functions