Decision Tree
MADlib 1.10.0 User Documentation

A decision tree is a supervised learning method that can be used for classification and regression. It consists of a tree structure in which internal nodes represent tests on attributes and the branches from those nodes represent the outcomes of the tests. Each leaf node holds a class label (or a regression value), and the paths from the root to the leaf nodes define the set of classification or regression rules.

Training Function
We implement the decision tree using the CART algorithm, introduced by Breiman et al. [1]. The training function has the following syntax:
tree_train(
    training_table_name,
    output_table_name,
    id_col_name,
    dependent_variable,
    list_of_features,
    list_of_features_to_exclude,
    split_criterion,
    grouping_cols,
    weights,
    max_depth,
    min_split,
    min_bucket,
    num_splits,
    pruning_params,
    surrogate_params,
    verbosity
    )
Arguments
training_table_name

TEXT. The name of the table containing the training data.

output_table_name

TEXT. The name of the generated table containing the model. If a table with the same name already exists, then the function will return an error.

The model table produced by the training function contains the following columns:

<...>

Grouping columns, if provided as input, in the same types as the training table. This could be multiple columns depending on the grouping_cols input.

tree

BYTEA8. Trained decision tree model stored in a binary format.

cat_levels_in_text

TEXT[]. Ordered levels of categorical variables.
cat_n_levels

INTEGER[]. Number of levels for each categorical variable.

tree_depth

INTEGER. The maximum depth of the tree obtained after training (the root node has depth 0).

pruning_cp

DOUBLE PRECISION. The cost-complexity parameter used for pruning the trained tree(s). This would be different from the 'input_cp' value if cross-validation is used.
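
For reference, a minimal sketch of inspecting these columns, assuming the model table is named 'train_output' as in the examples below:

    SELECT cat_levels_in_text, cat_n_levels, tree_depth, pruning_cp
    FROM train_output;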

A summary table named <model_table>_summary is also created at the same time, which has the following columns:

method

TEXT. 'tree_train'

is_classification

BOOLEAN. TRUE if the decision trees are for classification, FALSE if for regression.

source_table

TEXT. The data source table name.

model_table

TEXT. The model table name.

id_col_name

TEXT. The ID column name.

dependent_varname

TEXT. The dependent variable.

independent_varname

TEXT. The independent variables.

cat_features

TEXT. The list of categorical feature names as a comma-separated string.

con_features

TEXT. The list of continuous feature names as a comma-separated string.

grouping_col

TEXT. Names of grouping columns.

num_all_groups

INTEGER. Number of groups in decision tree training.

num_failed_groups

INTEGER. Number of failed groups in decision tree training.

total_rows_processed

BIGINT. Total number of rows processed in all groups.

total_rows_skipped

BIGINT. Total number of rows skipped in all groups due to missing values or failures.

dependent_var_levels

TEXT. For classification, the distinct levels of the dependent variable.

dependent_var_type

TEXT. The type of dependent variable.

input_cp

DOUBLE PRECISION. The complexity parameter (cp) used for pruning the trained tree(s) before cross-validation is run. This is the same as the cp value input via pruning_params.

independent_var_types

TEXT. A comma-separated string listing the types of the independent variables.
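
A minimal sketch of reviewing this summary table, again assuming the model table name 'train_output' from the examples below:

    SELECT is_classification, cat_features, con_features,
           total_rows_processed, total_rows_skipped
    FROM train_output_summary;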

id_col_name

TEXT. Name of the column containing id information in the training data. This is a mandatory argument and is used for prediction and cross-validation. The values are expected to be unique for each row.

dependent_variable

TEXT. Name of the column that contains the output (response) for training. Boolean, integer and text types are considered classification outputs, while double precision values are considered regression outputs. The response variable for a classification tree can be multinomial, but the time and space complexity of the training function increases linearly as the number of response classes increases.

list_of_features

TEXT. Comma-separated string of column names to use as predictors. Can also be a '*' implying all columns are to be used as predictors (except the ones included in the next argument). The types of the features can be mixed: boolean, integer, and text columns are considered categorical, and double precision columns are considered continuous. The categorical variables are not encoded and are used as is for training.

There are no limitations on the number of levels in a categorical variable. It is, however, important to note that we don't test for every combination of levels of a categorical variable when evaluating a split. We order the levels of the variable by the entropy of the variable in predicting the response. The split at each node is evaluated between these ordered levels.

list_of_features_to_exclude

TEXT. Comma-separated string of column names to exclude from the predictors list. If the dependent_variable is an expression (including a cast of a column name), then this list should include all columns present in the dependent_variable expression; otherwise those columns will be included in the features. The names in this parameter should be identical to the names used in the table and quoted appropriately.
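
As an illustration of the expression case, here is a hypothetical sketch (the output table name 'hp_tree_output' is made up; the mt_cars table comes from the regression example further below). The response is a cast of the integer column 'hp', which makes it a regression target, so 'hp' is listed here to keep it out of the '*' feature set:

    SELECT madlib.tree_train('mt_cars',                -- source table (see regression example below)
                             'hp_tree_output',         -- hypothetical output model table
                             'id',                     -- id column
                             'hp::double precision',   -- expression response (cast forces regression)
                             '*',                      -- all remaining columns as features
                             'id, hp',                 -- exclude id and the column used in the response
                             'mse', NULL::text, NULL::text,
                             10, 8, 3, 10);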

split_criterion

TEXT, default = 'gini' for classification, 'mse' for regression. The impurity function used to evaluate candidate splits. Supported criteria are 'gini', 'entropy', and 'misclassification' for classification trees. For regression trees, a split_criterion of 'mse' is always used (irrespective of the input for this argument).

grouping_cols (optional)

TEXT, default: NULL. Comma-separated list of column names to group the data by. This will result in multiple decision trees, one for each group.
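
For example, the hypothetical call below (using the dt_golf table from the classification example further below, with a made-up output table name) would train one tree per value of "OUTLOOK":

    SELECT madlib.tree_train('dt_golf', 'train_output_grp', 'id', 'class',
                             'temperature, humidity, windy',   -- features
                             NULL::text,                       -- exclude columns
                             'gini',
                             '"OUTLOOK"',                      -- grouping column: one tree per outlook value
                             NULL::text, 5, 3, 1, 10);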

weights (optional)

TEXT. Column name containing weights for each observation.

max_depth (optional)

INTEGER, default: 10. Maximum depth of any node of the final tree, with the root node counted as depth 0.

min_split (optional)

INTEGER, default: 20. Minimum number of observations that must exist in a node for a split to be attempted. The best value for this parameter depends on the number of tuples in the dataset.

min_bucket (optional)

INTEGER, default: min_split/3. Minimum number of observations in any terminal node. If only one of min_bucket or min_split is specified, min_split is set to min_bucket*3 or min_bucket to min_split/3, as appropriate.

num_splits (optional)

INTEGER, default: 100. Continuous-valued features are binned into discrete quantiles to compute split boundaries. This global parameter determines the resolution of splits for continuous features. A higher number of bins leads to better predictions but also to longer processing time.

pruning_params (optional)

TEXT. Comma-separated string of key-value pairs giving the parameters for pruning the tree. The parameters currently accepted are:

cp

Default: 0. A split on a node is attempted only if it decreases the overall lack of fit by a factor of 'cp', else the split is pruned away. This value is used to create an initial tree before running cross-validation (see below).

n_folds

Default: 0 (i.e. no cross-validation). Number of cross-validation folds to use to compute the best value of cp. To perform cross-validation, a value of n_folds greater than 2 should be given. An additional output table <model_table>_cv is created containing the evaluated cp values and the corresponding cross-validation error. The tree returned in the output table corresponds to the cp with the lowest cross-validation error (the maximum cp is picked if multiple values have the same error).

The list of cp values is automatically computed by parsing through the tree initially trained on the complete dataset. The tree output is a subset of this initial tree corresponding to the best computed cp.
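
A hypothetical sketch of pruning with cross-validation, using the dt_golf table from the classification example below (the dataset is tiny, so the fold count is purely illustrative). In addition to the model and summary tables, this would create train_output_cv with the explored cp values and their cross-validation errors:

    DROP TABLE IF EXISTS train_output, train_output_summary, train_output_cv;
    SELECT madlib.tree_train('dt_golf', 'train_output', 'id', 'class',
                             '"OUTLOOK", temperature, humidity, windy', NULL::text,
                             'gini', NULL::text, NULL::text,
                             5, 3, 1, 10,
                             'cp=0.01, n_folds=5');   -- pruning_params: prune with cp, pick best cp by 5-fold CV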

surrogate_params

TEXT. Comma-separated string of key-value pairs controlling the behavior of surrogate splits for each node. A surrogate variable is another predictor variable that is associated (correlated) with the primary predictor variable for a split. The surrogate variable comes into use when the primary predictor value is NULL. This parameter currently accepts one argument:

max_surrogates

Default: 0. Number of surrogates to store for each node.

verbosity (optional)
BOOLEAN, default: FALSE. Provides verbose output of the training result.
Note
  • Many of the parameters are designed to be similar to the popular R package 'rpart'. An important distinction between rpart and the MADlib function is that for both response and feature variables, MADlib considers integer values as categorical values, while rpart considers them as continuous.
  • When using no surrogates (max_surrogates=0), all rows containing NULL values for any of the features used for training will be ignored during training and prediction.
  • When cross-validation is not used (n_folds=0), each tree output is pruned by the input cost-complexity (cp). With cross-validation, the input cp is the minimum of all the explored values of 'cp'. During cross-validation, we train an initial tree using the provided cp and explore all possible sub-trees (up to a single-node tree) to compute the optimal sub-tree. The optimal sub-tree and the 'cp' corresponding to this optimal sub-tree are placed in the output_table, in the columns named tree and pruning_cp respectively.
  • The main parameters that affect memory usage are: depth of tree, number of features, and number of values per feature. If you are hitting VMEM limits, consider reducing one or more of these parameters.

Prediction Function
The prediction function estimates the conditional mean given a new predictor. It has the following syntax:
tree_predict(tree_model,
             new_data_table,
             output_table,
             type)

Arguments

tree_model

TEXT. Name of the table containing the decision tree model. This should be the output table returned from tree_train.

new_data_table

TEXT. Name of the table containing prediction data. This table is expected to contain the same features that were used during training. The table should also contain id_col_name used for identifying each row.

output_table

TEXT. Name of the table to output prediction results. If this table already exists, an error is returned. The table contains the id_col_name column giving the 'id' for each prediction and the prediction columns for the dependent variable.

If type = 'response', then the table has a single additional column with the prediction value of the response. The type of this column depends on the type of the response variable used during training.

If type = 'prob', then the table has multiple additional columns, one for each possible value of the response variable. The columns are labeled as 'estimated_prob_dep_value', where dep_value represents each value of the response variable.

type
TEXT, optional, default: 'response'. For regression trees, the output is always the predicted value of the dependent variable. For classification trees, the type variable can be 'response', giving the classification prediction as output, or 'prob', giving the class probabilities as output. For each value of the dependent variable, a column with the probabilities is added to the output table.
Note
If the new_data_table contains categories of categorical variables not seen in the training data, the prediction for that row will be NULL.
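
As a sketch of the 'prob' output, assuming the classification model trained in the examples below (the output table name 'prob_results' is made up):

    DROP TABLE IF EXISTS prob_results;
    SELECT madlib.tree_predict('train_output',   -- tree model from the classification example
                               'dt_golf',        -- new data table
                               'prob_results',   -- hypothetical output table
                               'prob');          -- class probabilities
    -- prob_results contains the id column plus one estimated_prob_* column
    -- per distinct value of the dependent variable 'class'.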

Display Function
The display function outputs a graph representation of the decision tree. The output can either be in the popular 'dot' format that can be visualized using various programs including those in the GraphViz package, or in a simple text format. The details of the text format are output with the tree.
tree_display(tree_model, dot_format, verbosity)

An additional display function is provided to output the surrogate splits chosen for each internal node:

tree_surr_display(tree_model)

The output contains the list of surrogate splits for each internal node. The nodes are sorted in ascending order by id, which is equivalent to viewing the tree in a breadth-first manner. For each surrogate, we output the surrogate split (variable and threshold) and also give the number of rows that were common between the primary split and the surrogate split. Finally, the number of rows present in the majority branch of the primary split is also shown. Only surrogates that perform better than this majority branch are included in the surrogate list. When the primary variable has a NULL value, the surrogate variables are used in order to compute the split for that node. If all surrogate variables are NULL, then the majority branch is used to compute the split for a tuple.

Arguments

tree_model
TEXT. Name of the table containing the decision tree model.
dot_format
BOOLEAN, default = TRUE. Output can either be in a dot format or a text format. If TRUE, the result is in the dot format, else output is in text format.
verbosity
BOOLEAN, default = FALSE. If set to TRUE, the dot format output will contain additional information (impurity, sample size, number of weighted rows for each response variable, and the classification or prediction if the tree was pruned at this level).

The output is always returned as a 'TEXT'. For the dot format, the output can be redirected to a file on the client side and then rendered using visualization programs.

To export the dot format result to an external file, use the method below. Use unaligned table output mode for psql with the '-A' flag, and inside the psql client use both '\t' and '\o':

> # under bash
> psql -A my_database
# -- in psql now
# \t
# \o test.dot -- export to a file
# select madlib.tree_display('tree_out');
# \o
# \t

After the dot file has been generated, use third-party plotting software to plot the trees in a nice format:

> # under bash, convert the dot file into a PDF file
> dot -Tpdf test.dot > test.pdf
> xpdf test.pdf&

Please see the examples below for more details on the contents of the tree output formats.

Examples

Decision Tree Classification Example

  1. Prepare input data:
    DROP TABLE IF EXISTS dt_golf;
    CREATE TABLE dt_golf (
        id integer NOT NULL,
        "OUTLOOK" text,
        temperature double precision,
        humidity double precision,
        windy text,
        class text
    );
    
    COPY dt_golf (id,"OUTLOOK",temperature,humidity,windy,class) FROM stdin WITH DELIMITER '|';
    1|sunny|85|85|'false'|'Don't Play'
    2|sunny|80|90|'true'|'Don't Play'
    3|overcast|83|78|'false'|'Play'
    4|rain|70|96|'false'|'Play'
    5|rain|68|80|'false'|'Play'
    6|rain|65|70|'true'|'Don't Play'
    7|overcast|64|65|'true'|'Play'
    8|sunny|72|95|'false'|'Don't Play'
    9|sunny|69|70|'false'|'Play'
    10|rain|75|80|'false'|'Play'
    11|sunny|75|70|'true'|'Play'
    12|overcast|72|90|'true'|'Play'
    13|overcast|81|75|'false'|'Play'
    14|rain|71|80|'true'|'Don't Play'
    \.
    
  2. Run the decision tree training function:
    DROP TABLE IF EXISTS train_output, train_output_summary;
    SELECT madlib.tree_train('dt_golf',         -- source table
                             'train_output',    -- output model table
                             'id',              -- id column
                             'class',           -- response
                             '"OUTLOOK", temperature, humidity, windy',   -- features
                             NULL::text,        -- exclude columns
                             'gini',            -- split criterion
                             NULL::text,        -- no grouping
                             NULL::text,        -- no weights
                             5,                 -- max depth
                             3,                 -- min split
                             1,                 -- min bucket
                             10                 -- number of bins per continuous variable
                             );
    
  3. Predict output categories for the same data that was used for input:
    DROP TABLE IF EXISTS prediction_results;
    SELECT madlib.tree_predict('train_output',          -- tree model
                               'dt_golf',               -- new data table
                               'prediction_results',    -- output table
                               'response');             -- show prediction
    SELECT * FROM prediction_results ORDER BY id;
    
    Result:
     id | estimated_class
    ----+-----------------
      1 | 'Don't Play'
      2 | 'Don't Play'
      3 | 'Play'
      4 | 'Play'
      5 | 'Play'
      6 | 'Don't Play'
      7 | 'Play'
      8 | 'Don't Play'
      9 | 'Play'
     10 | 'Play'
     11 | 'Play'
     12 | 'Play'
     13 | 'Play'
     14 | 'Don't Play'
    (14 rows)
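
    As a quick check (not part of the original example), the predictions can be compared against the observed labels; for this training data the predictions match the original class values, so the query returns 0:
    SELECT count(*) AS misclassified
    FROM prediction_results p
    JOIN dt_golf g USING (id)
    WHERE p.estimated_class <> g.class;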
    
  4. Create a text display of the tree:
    SELECT madlib.tree_display('train_output', FALSE);
    
    Result:
     -------------------------------------
     - Each node represented by 'id' inside ().
     - Leaf nodes have a * while internal nodes have the split condition at the end.
     - For each internal node (i), it's children will be at (2i+1) and (2i+2).
     - For each split the first indented child (2i+1) is the 'True' node and
    second indented child (2i+2) is the 'False' node.
     - Number of (weighted) rows for each response variable inside [].
     - Order of values = ['"Don\'t Play"', '"Play"']
     -------------------------------------
    (0)[ 5 9]  "OUTLOOK"<={overcast}
      (1)[ 0 4]  *
      (2)[ 5 5]  temperature<=75
        (5)[ 3 5]  temperature<=65
          (11)[ 1 0]  *
          (12)[ 2 5]  temperature<=70
            (25)[ 0 3]  *
            (26)[ 2 2]  temperature<=72
              (53)[ 2 0]  *
              (54)[ 0 2]  *
        (6)[ 2 0]  *
     -------------------------------------
    
    Here are some more details on how to interpret the tree display above... Node numbering starts at 0 for the root node and would be contiguous 1,2...n if the tree were completely full (no pruning). Since the tree has been pruned, the node numbering is not contiguous. The order of values [x y] indicates the number of weighted rows that correspond to ["Don't play" "Play"] before the node test. For example, at the root node 0, there are 5 rows that "Don't play" and 9 rows that "Play" in the raw data. If we apply the test of "OUTLOOK" being overcast, then the True result is leaf node 1 which is "Play". There are 0 "Don't play" rows and 4 "Play" rows that correspond to this case (overcast). The remaining 5 "Don't play" rows and 5 "Play" rows are then tested at node 2 on temperature<=75. And so on down the tree.
  5. Create a dot format display of the tree:
    SELECT madlib.tree_display('train_output', TRUE);
    
    Result:
    digraph "Classification tree for dt_golf" {
             subgraph "cluster0"{
             label=""
    "g0_0" [label="\"OUTLOOK"<={overcast}", shape=ellipse];
    "g0_0" -> "g0_1"[label="yes"];
    "g0_1" [label=""Play"",shape=box];
    "g0_0" -> "g0_2"[label="no"];
    "g0_2" [label="temperature<=75", shape=ellipse];
    "g0_2" -> "g0_5"[label="yes"];
    "g0_2" -> "g0_6"[label="no"];
    "g0_6" [label=""Don't Play"",shape=box];
    "g0_5" [label="temperature<=65", shape=ellipse];
    "g0_5" -> "g0_11"[label="yes"];
    "g0_11" [label=""Don't Play"",shape=box];
    "g0_5" -> "g0_12"[label="no"];
    "g0_12" [label="temperature<=70", shape=ellipse];
    "g0_12" -> "g0_25"[label="yes"];
    "g0_25" [label=""Play"",shape=box];
    "g0_12" -> "g0_26"[label="no"];
    "g0_26" [label="temperature<=72", shape=ellipse];
    "g0_26" -> "g0_53"[label="yes"];
    "g0_53" [label=""Don't Play"",shape=box];
    "g0_26" -> "g0_54"[label="no"];
    "g0_54" [label=""Play"",shape=box];
       } //--- end of subgraph------------
     } //---end of digraph---------
    
  6. Now create a dot format display of the tree with additional information:
    SELECT madlib.tree_display('train_output', TRUE, TRUE);
    
    Result:
    digraph "Classification tree for dt_golf" {
             subgraph "cluster0"{
             label=""
    "g0_0" [label="\"OUTLOOK" in {overcast}\n impurity = 0.459184\n samples = 14\n value = [5 9]\n class = "'Play'"", shape=ellipse];
    "g0_0" -> "g0_1"[label="yes"];
    "g0_1" [label=""'Play'"\n samples = 4\n value = [0 4]",shape=box];
    "g0_0" -> "g0_2"[label="no"];
    "g0_2" [label="temperature <= 75\n impurity = 0.5\n samples = 10\n value = [5 5]\n class = "'Don't Play'"", shape=ellipse];
    "g0_2" -> "g0_5"[label="yes"];
    "g0_2" -> "g0_6"[label="no"];
    "g0_6" [label=""'Don't Play'"\n samples = 2\n value = [2 0]",shape=box];
    "g0_5" [label="temperature <= 65\n impurity = 0.46875\n samples = 8\n value = [3 5]\n class = "'Play'"", shape=ellipse];
    "g0_5" -> "g0_11"[label="yes"];
    "g0_11" [label=""'Don't Play'"\n samples = 1\n value = [1 0]",shape=box];
    "g0_5" -> "g0_12"[label="no"];
    "g0_12" [label="temperature <= 70\n impurity = 0.408163\n samples = 7\n value = [2 5]\n class = "'Play'"", shape=ellipse];
    "g0_12" -> "g0_25"[label="yes"];
    "g0_25" [label=""'Play'"\n samples = 3\n value = [0 3]",shape=box];
    "g0_12" -> "g0_26"[label="no"];
    "g0_26" [label="temperature <= 72\n impurity = 0.5\n samples = 4\n value = [2 2]\n class = "'Don't Play'"", shape=ellipse];
    "g0_26" -> "g0_53"[label="yes"];
    "g0_53" [label=""'Don't Play'"\n samples = 2\n value = [2 0]",shape=box];
    "g0_26" -> "g0_54"[label="no"];
    "g0_54" [label=""'Play'"\n samples = 2\n value = [0 2]",shape=box];
       } //--- end of subgraph------------
     } //---end of digraph---------
    
    The additional information in each node is: impurity, sample size, number of weighted rows for each response variable, and classification if the tree was pruned at this level.

Decision Tree Regression Example

  1. Prepare input data.
    DROP TABLE IF EXISTS mt_cars;
    CREATE TABLE mt_cars (
        id integer NOT NULL,
        mpg double precision,
        cyl integer,
        disp double precision,
        hp integer,
        drat double precision,
        wt double precision,
        qsec double precision,
        vs integer,
        am integer,
        gear integer,
        carb integer
    );
    
    COPY mt_cars (id,mpg,cyl,disp,hp,drat,wt,qsec,vs,am,gear,carb) FROM stdin WITH DELIMITER '|' NULL 'null';
    1|18.7|8|360|175|3.15|3.44|17.02|0|0|3|2
    2|21|6|160|110|3.9|2.62|16.46|0|1|4|4
    3|24.4|4|146.7|62|3.69|3.19|20|1|0|4|2
    4|21|6|160|110|3.9|2.875|17.02|0|1|4|4
    5|17.8|6|167.6|123|3.92|3.44|18.9|1|0|4|4
    6|16.4|8|275.8|180|3.078|4.07|17.4|0|0|3|3
    7|22.8|4|108|93|3.85|2.32|18.61|1|1|4|1
    8|17.3|8|275.8|180|3.078|3.73|17.6|0|0|3|3
    9|21.4|null|258|110|3.08|3.215|19.44|1|0|3|1
    10|15.2|8|275.8|180|3.078|3.78|18|0|0|3|3
    11|18.1|6|225|105|2.768|3.46|20.22|1|0|3|1
    12|32.4|4|78.7|66|4.08|2.20|19.47|1|1|4|1
    13|14.3|8|360|245|3.21|3.578|15.84|0|0|3|4
    14|22.8|4|140.8|95|3.92|3.15|22.9|1|0|4|2
    15|30.4|4|75.7|52|4.93|1.615|18.52|1|1|4|2
    16|19.2|6|167.6|123|3.92|3.44|18.3|1|0|4|4
    17|33.9|4|71.14|65|4.22|1.835|19.9|1|1|4|1
    18|15.2|null|304|150|3.15|3.435|17.3|0|0|3|2
    19|10.4|8|472|205|2.93|5.25|17.98|0|0|3|4
    20|27.3|4|79|66|4.08|1.935|18.9|1|1|4|1
    21|10.4|8|460|215|3|5.424|17.82|0|0|3|4
    22|26|4|120.3|91|4.43|2.14|16.7|0|1|5|2
    23|14.7|8|440|230|3.23|5.345|17.42|0|0|3|4
    24|30.4|4|95.14|113|3.77|1.513|16.9|1|1|5|2
    25|21.5|4|120.1|97|3.70|2.465|20.01|1|0|3|1
    26|15.8|8|351|264|4.22|3.17|14.5|0|1|5|4
    27|15.5|8|318|150|2.768|3.52|16.87|0|0|3|2
    28|15|8|301|335|3.54|3.578|14.6|0|1|5|8
    29|13.3|8|350|245|3.73|3.84|15.41|0|0|3|4
    30|19.2|8|400|175|3.08|3.845|17.05|0|0|3|2
    31|19.7|6|145|175|3.62|2.77|15.5|0|1|5|6
    32|21.4|4|121|109|4.11|2.78|18.6|1|1|4|2
    \.
    
  2. Run the decision tree training function:
    DROP TABLE IF EXISTS train_output, train_output_summary;
    SELECT madlib.tree_train('mt_cars',         -- source table
                             'train_output',    -- output model table
                             'id',              -- id column
                             'mpg',             -- dependent variable
                             '*',               -- features
                             'id, hp, drat, am, gear, carb',  -- exclude columns
                             'mse',             -- split criterion
                             NULL::text,        -- no grouping
                             NULL::text,        -- no weights
                             10,                -- max depth
                             8,                 -- min split
                             3,                 -- min bucket
                             10,                -- number of bins per continuous variable
                             NULL,              -- pruning parameters
                             'max_surrogates=2' -- number of surrogates
                             );
    
  3. Display the decision tree in basic text format:
    SELECT madlib.tree_display('train_output', FALSE);
    
    Result:
      -------------------------------------
     - Each node represented by 'id' inside ().
     - Each internal nodes has the split condition at the end, while each
         leaf node has a * at the end.
     - For each internal node (i), its child nodes are indented by 1 level
         with ids (2i+1) for True node and (2i+2) for False node.
     - Number of rows and average response value inside []. For a leaf node, this is the prediction.
     -------------------------------------
     (0)[32, 20.0906]  cyl in {8,6}
        (1)[21, 16.6476]  disp <= 258
           (3)[7, 19.7429]  *
           (4)[14, 15.1]  qsec <= 17.42
              (9)[10, 15.81]  qsec <= 16.9
                 (19)[5, 14.78]  *
                 (20)[5, 16.84]  *
              (10)[4, 13.325]  *
        (2)[11, 26.6636]  wt <= 2.2
           (5)[6, 30.0667]  *
           (6)[5, 22.58]  *
      -------------------------------------
    (1 row)
    
  4. Display the surrogates in the decision tree:
    SELECT madlib.tree_surr_display('train_output');
    
    Result:
     -------------------------------------
           Surrogates for internal nodes
     -------------------------------------
     (0) cyl in {8,6}
          1: disp > 146.7    [common rows = 29]
          2: vs in {0}    [common rows = 26]
          [Majority branch = 19 ]
     (1) disp <= 258
          1: cyl in {6,4}    [common rows = 19]
          2: vs in {1}    [common rows = 18]
          [Majority branch = 14 ]
     (2) wt <= 2.2
          1: disp <= 108    [common rows = 9]
          2: qsec <= 18.52    [common rows = 8]
          [Majority branch = 6 ]
     (4) qsec <= 17.42
          1: disp > 275.8    [common rows = 11]
          2: vs in {0}    [common rows = 10]
          [Majority branch = 10 ]
     (9) qsec <= 16.9
          1: wt <= 3.84    [common rows = 8]
          2: disp <= 360    [common rows = 7]
          [Majority branch = 5 ]
     -------------------------------------
    (1 row)
    
    Note
     The 'cyl' variable above has two tuples with NULL values. In the prediction example below, the surrogate splits for the cyl in {8, 6} split are used to predict those two tuples (id = 9 and id = 18). The surrogate splits are tried in descending order of agreement (the order listed above) until a surrogate variable is found that is not NULL. In this case, the two tuples have non-NULL values for disp, hence the disp > 146.7 split is used to make the prediction. If all the surrogate variables had been NULL, then the majority branch would have been followed.
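     The two rows in question can be listed directly (query added here for illustration); their 'disp' values are non-NULL, so the first surrogate decides the branch:
     SELECT id, cyl, disp, vs FROM mt_cars WHERE cyl IS NULL ORDER BY id;
     -- Returns id = 9 (disp = 258) and id = 18 (disp = 304) from the sample data above.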
  5. Predict regression output for the same data and compare with original:
    DROP TABLE IF EXISTS prediction_results;
    SELECT madlib.tree_predict('train_output',
                               'mt_cars',
                               'prediction_results',
                               'response');
    SELECT s.id, mpg, estimated_mpg FROM prediction_results p, mt_cars s WHERE s.id = p.id ORDER BY id;
    
    Result:
      id | mpg  |  estimated_mpg
    ----+------+------------------
      1 | 18.7 |            16.84
      2 |   21 | 19.7428571428571
      3 | 24.4 |            22.58
      4 |   21 | 19.7428571428571
      5 | 17.8 | 19.7428571428571
      6 | 16.4 |            16.84
      7 | 22.8 |            22.58
      8 | 17.3 |           13.325
      9 | 21.4 | 19.7428571428571
     10 | 15.2 |           13.325
     11 | 18.1 | 19.7428571428571
     12 | 32.4 | 30.0666666666667
     13 | 14.3 |            14.78
     14 | 22.8 |            22.58
     15 | 30.4 | 30.0666666666667
     16 | 19.2 | 19.7428571428571
     17 | 33.9 | 30.0666666666667
     18 | 15.2 |            16.84
     19 | 10.4 |           13.325
     20 | 27.3 | 30.0666666666667
     21 | 10.4 |           13.325
     22 |   26 | 30.0666666666667
     23 | 14.7 |            16.84
     24 | 30.4 | 30.0666666666667
     25 | 21.5 |            22.58
     26 | 15.8 |            14.78
     27 | 15.5 |            14.78
     28 |   15 |            14.78
     29 | 13.3 |            14.78
     30 | 19.2 |            16.84
     31 | 19.7 | 19.7428571428571
     32 | 21.4 |            22.58
    (32 rows)
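
    As a quick check (not part of the original example), the training-set mean squared error of these predictions can be computed directly:
    SELECT avg((mpg - estimated_mpg)^2) AS mse
    FROM prediction_results p
    JOIN mt_cars s USING (id);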
    

Literature
[1] Breiman, Leo; Friedman, J. H.; Olshen, R. A.; Stone, C. J. (1984). Classification and regression trees. Monterey, CA: Wadsworth & Brooks/Cole Advanced Books & Software.

Related Topics

File decision_tree.sql_in documenting the training function

Random Forest