Latent Dirichlet Allocation (LDA) is a generative probabilistic model for natural texts. It is used in problems such as automated topic discovery, collaborative filtering, and document classification.
In addition to an implementation of LDA, this MADlib module provides a number of helper functions to interpret the results of LDA training and prediction.
The LDA model posits that each document is associated with a mixture of various topics (e.g., a document is related to Topic 1 with probability 0.7, and Topic 2 with probability 0.3), and that each word in the document is attributable to one of the document's topics. There is a (symmetric) Dirichlet prior with parameter \( \alpha \) on each document's topic mixture. In addition, there is another (symmetric) Dirichlet prior with parameter \( \beta \) on the distribution of words for each topic.
The following generative process then defines a distribution over a corpus of documents: each topic draws a word distribution from the Dirichlet prior with parameter \( \beta \); each document draws a topic mixture from the Dirichlet prior with parameter \( \alpha \); and each word in a document is generated by first drawing a topic from the document's topic mixture and then drawing the word from that topic's word distribution.
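In symbols, with \( \theta_d \) denoting the topic mixture of document \( d \) and \( \phi_k \) the word distribution of topic \( k \), this is the standard formulation of [1]:

\[
\begin{aligned}
\phi_k &\sim \mathrm{Dirichlet}(\beta) && \text{for each topic } k,\\
\theta_d &\sim \mathrm{Dirichlet}(\alpha) && \text{for each document } d,\\
z_{d,n} &\sim \mathrm{Multinomial}(\theta_d),\quad w_{d,n} \sim \mathrm{Multinomial}(\phi_{z_{d,n}}) && \text{for each word position } n \text{ in document } d.
\end{aligned}
\]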
In practice, only the words in each document are observable. The topic mixture of each document and the topic for each word in each document are latent unobservable variables that need to be inferred from the observables, and this is referred to as the inference problem for LDA. Exact inference is intractable, but several approximate inference algorithms for LDA have been developed. The simple and effective Gibbs sampling algorithm described in Griffiths and Steyvers [2] appears to be the current algorithm of choice.
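For reference, the collapsed Gibbs sampler of [2] repeatedly resamples the topic assignment \( z_i \) of each word occurrence \( i \) from

\[
P(z_i = k \mid \mathbf{z}_{-i}, \mathbf{w}) \;\propto\; \frac{n^{(w_i)}_{-i,k} + \beta}{n^{(\cdot)}_{-i,k} + W\beta} \cdot \frac{n^{(d_i)}_{-i,k} + \alpha}{n^{(d_i)}_{-i,\cdot} + T\alpha},
\]

where \( n^{(w_i)}_{-i,k} \) counts how often word \( w_i \) is assigned to topic \( k \), \( n^{(\cdot)}_{-i,k} \) is the total number of words assigned to topic \( k \), and \( n^{(d_i)}_{-i,k} \) is the number of words in document \( d_i \) assigned to topic \( k \), all excluding the current position \( i \); \( W \) is the vocabulary size (voc_size) and \( T \) the number of topics (topic_num). The denominator of the second factor does not depend on \( k \) and is often dropped.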
This implementation provides a parallel and scalable in-database solution for LDA based on Gibbs sampling. It takes advantage of the shared-nothing MPP architecture, and differs from the implementations one would write for MPI or MapReduce frameworks.
The training function has the following syntax:

lda_train( data_table,
           model_table,
           output_data_table,
           voc_size,
           topic_num,
           iter_num,
           alpha,
           beta,
           evaluate_every,
           perplexity_tol
         )

Arguments
data_table | TEXT. Name of the table storing the training dataset. Each row is in the form <docid, wordid, count>, where docid, wordid, and count are non-negative integers. The docid column refers to the document ID, the wordid column is the word ID (the index of a word in the vocabulary), and count is the number of occurrences of the word in the document. Please note: wordid must consist of contiguous integers going from 0 to voc_size − 1, and the column names docid, wordid, and count are currently fixed, so you must use these exact names in the data_table. The function Term Frequency can be used to generate input in the required format from raw documents. |
---|---|
model_table | TEXT. Name of the table created by LDA training to store the learned model (not human readable). |
output_data_table | TEXT. Name of the table created by LDA training to store the readable output data. |
voc_size | INTEGER. Size of the vocabulary. As mentioned above for the input table, wordid consists of contiguous integers going from 0 to voc_size − 1. |
topic_num | INTEGER. Number of topics. |
iter_num | INTEGER. Maximum number of iterations. Training may terminate earlier if the perplexity tolerance is reached (see 'perplexity_tol'). |
alpha | DOUBLE PRECISION. Dirichlet prior for the per-document topic multinomial. |
beta | DOUBLE PRECISION. Dirichlet prior for the per-topic word multinomial. |
evaluate_every | INTEGER (optional). Evaluate perplexity every n iterations. |
perplexity_tol | DOUBLE PRECISION (optional). Perplexity tolerance at which to stop iterating. |
The model table created by LDA training echoes the training parameters voc_size, topic_num, alpha, and beta, and additionally contains the following columns:

model | BIGINT[]. The encoded model description (not human readable). |
---|---|
num_iterations | INTEGER. Number of iterations that training ran for, which may be less than the maximum value specified in the parameter 'iter_num' if the perplexity tolerance was reached. |
perplexity | DOUBLE PRECISION[]. Array of perplexity values as per the 'evaluate_every' parameter. For example, if 'evaluate_every=5' this would be an array of perplexity values for every 5th iteration, plus the last iteration. |
perplexity_iters | INTEGER[]. Array indicating the iterations for which perplexity is calculated, as derived from the parameters 'iter_num' and 'evaluate_every'. For example, if 'iter_num=5' and 'evaluate_every=2', then 'perplexity_iters' would be {2,4,5}, indicating that perplexity is computed at iterations 2, 4 and 5 (at the end), unless of course training terminated earlier due to 'perplexity_tol'. If 'iter_num=5' and 'evaluate_every=1', then 'perplexity_iters' would be {1,2,3,4,5}, indicating that perplexity is computed at every iteration, again assuming training ran the full number of iterations. |
The output data table created by LDA training has the following columns:

docid | INTEGER. Document id from the input 'data_table'. |
---|---|
wordcount | INTEGER. Number of words in the document, including repeats. For example, if a word appears 3 times in the document, it contributes 3 to this count. |
words | INTEGER[]. Array of the distinct wordids in the document, not including repeats. For example, if a word appears 3 times in the document, it appears only once in the words array. |
counts | INTEGER[]. Frequency of occurrence of each word in the document, indexed the same as the words array above. For example, if the 2nd element of the counts array is 4, the word in the 2nd element of the words array occurs 4 times in the document. |
topic_count | INTEGER[]. Count of words in the document assigned to each topic. This array is of length topic_num. Topic ids are contiguous integers going from 0 to topic_num − 1. |
topic_assignment | INTEGER[]. Array indicating which topic each word in the document corresponds to. This array is of length wordcount. Words that are repeated n times in the document show up consecutively n times in this array. |
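As a quick sanity check, the parallel words and counts arrays can be expanded back into the <docid, wordid, count> form of the input table. A minimal sketch against the lda_output_data table created in the examples below, assuming a PostgreSQL 9.4+ based database for multi-argument unnest (older Greenplum releases may not support it):

-- Expand the parallel 'words' and 'counts' arrays back into
-- one <docid, wordid, count> row per distinct word in each document.
SELECT docid, u.wordid, u.count
FROM lda_output_data,
     unnest(words, counts) AS u(wordid, count)
ORDER BY docid, wordid;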
Prediction involves labelling test documents using a learned LDA model:

lda_predict( data_table,
             model_table,
             output_predict_table
           );

Arguments

data_table | TEXT. Name of the table storing the test dataset, in the same <docid, wordid, count> form as the training data_table. As in training, wordid consists of contiguous integers going from 0 to voc_size − 1. |
---|---|
model_table | TEXT. The model table generated by the training process. |
output_predict_table | TEXT. Name of the output table to store the prediction results, in the same format as the output data table generated by LDA training. |
Perplexity describes how well the model fits the data, with lower values indicating a better fit. It can be computed on the output of either training or prediction:

lda_get_perplexity( model_table,
                    output_data_table
                  );

Arguments

model_table | TEXT. The model table generated by the training process. |
---|---|
output_data_table | TEXT. The output data table generated by LDA training or LDA prediction. |
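For reference, perplexity is conventionally defined as the exponentiated negative average log-likelihood per word [1]:

\[
\mathrm{perplexity}(\mathcal{D}) = \exp\!\left\{ -\frac{\sum_{d=1}^{M} \log p(\mathbf{w}_d)}{\sum_{d=1}^{M} N_d} \right\},
\]

where \( \mathbf{w}_d \) are the words of document \( d \), \( N_d \) is the word count of document \( d \), and \( M \) is the number of documents.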
The following helper functions can be used to interpret the output from LDA training and LDA prediction.
Topic description by top-k words with highest probability
Applies to LDA training only.
lda_get_topic_desc( model_table,
                    vocab_table,
                    output_table,
                    top_k
                  )

Arguments

model_table | TEXT. The model table generated by the training process. |
---|---|
vocab_table | TEXT. The vocabulary table in the form <wordid, word>, as created by the term_frequency function (Term Frequency) with the parameter compute_vocab set to TRUE. |
output_table | TEXT. Name of the output table for the per-topic descriptions. |
top_k | INTEGER. The number of top words to report for each topic. |

The output table has the following columns:

topicid | INTEGER. Topic id. |
---|---|
wordid | INTEGER. Word id. |
prob | DOUBLE PRECISION. Probability that this topic will generate the word. |
word | TEXT. Word in text form. |
Per-word topic counts
Applies to LDA training only.
lda_get_word_topic_count( model_table,
                          output_table
                        )

Arguments

model_table | TEXT. The model table generated by the training process. |
---|---|
output_table | TEXT. Name of the output table for the per-word topic counts. |

The output table has the following columns:

wordid | INTEGER. Word id. |
---|---|
topic_count | INTEGER[]. Count of the word's association with each topic, i.e., how many times the word is assigned to each topic. The array is of length topic_num. |
Per-topic word counts
Applies to LDA training only.
lda_get_topic_word_count( model_table,
                          output_table
                        )

Arguments

model_table | TEXT. The model table generated by the training process. |
---|---|
output_table | TEXT. Name of the output table for the per-topic word counts. |

The output table has the following columns:

topicid | INTEGER. Topic id. |
---|---|
word_count | INTEGER[]. Array showing how often each word is assigned to the topic. The array is of length voc_size, indexed by wordid. |
Per-document word to topic mapping
Applies to both LDA training and LDA prediction.
lda_get_word_topic_mapping( output_data_table,  -- From training or prediction
                            output_table
                          )

Arguments

output_data_table | TEXT. The output data table generated by LDA training or LDA prediction. |
---|---|
output_table | TEXT. Name of the output table for the word to topic mapping. |

The output table has the following columns:

docid | INTEGER. Document id. |
---|---|
wordid | INTEGER. Word id. |
topicid | INTEGER. Topic id. |
First, create a set of documents:

DROP TABLE IF EXISTS documents;
CREATE TABLE documents(docid INT4, contents TEXT);

INSERT INTO documents VALUES
(0, 'Statistical topic models are a class of Bayesian latent variable models, originally developed for analyzing the semantic content of large document corpora.'),
(1, 'By the late 1960s, the balance between pitching and hitting had swung in favor of the pitchers. In 1968 Carl Yastrzemski won the American League batting title with an average of just .301, the lowest in history.'),
(2, 'Machine learning is closely related to and often overlaps with computational statistics; a discipline that also specializes in prediction-making. It has strong ties to mathematical optimization, which deliver methods, theory and application domains to the field.'),
(3, 'California''s diverse geography ranges from the Sierra Nevada in the east to the Pacific Coast in the west, from the Redwood–Douglas fir forests of the northwest, to the Mojave Desert areas in the southeast. The center of the state is dominated by the Central Valley, a major agricultural area.');

You can apply stemming, stop word removal and tokenization at this point in order to prepare the documents for text processing. Depending upon your database version, various tools are available. Databases based on more recent versions of PostgreSQL may do something like:
SELECT tsvector_to_array(to_tsvector('english',contents)) from documents;
 tsvector_to_array
-----------------------------------------------------------------------
 {analyz,bayesian,class,content,corpora,develop,document,larg,...}
 {1960s,1968,301,american,averag,balanc,bat,carl,favor,histori,...}
 {also,applic,close,comput,deliv,disciplin,domain,field,learn,...}
 {agricultur,area,california,center,central,coast,desert,divers,...}
(4 rows)

In this example, we assume a database based on an older version of PostgreSQL and just perform basic punctuation removal and tokenization. The array of words is added as a new column to the documents table:
ALTER TABLE documents ADD COLUMN words TEXT[];

UPDATE documents
SET words = regexp_split_to_array(
                lower(regexp_replace(contents, E'[,.;\']', '', 'g')),
                E'[\\s+]'
            );

\x on
SELECT * FROM documents ORDER BY docid;
-[ RECORD 1 ]--------------------------------------------------------------------------------------------
docid    | 0
contents | Statistical topic models are a class of Bayesian latent variable models, originally developed for analyzing the semantic content of large document corpora.
words    | {statistical,topic,models,are,a,class,of,bayesian,latent,variable,models,originally,developed,for,analyzing,the,semantic,content,of,large,document,corpora}
-[ RECORD 2 ]--------------------------------------------------------------------------------------------
docid    | 1
contents | By the late 1960s, the balance between pitching and hitting had swung in favor of the pitchers. In 1968 Carl Yastrzemski won the American League batting title with an average of just .301, the lowest in history.
words    | {by,the,late,1960s,the,balance,between,pitching,and,hitting,had,swung,in,favor,of,the,pitchers,in,1968,carl,yastrzemski,won,the,american,league,batting,title,with,an,average,of,just,301,the,lowest,in,history}
-[ RECORD 3 ]--------------------------------------------------------------------------------------------
docid    | 2
contents | Machine learning is closely related to and often overlaps with computational statistics; a discipline that also specializes in prediction-making. It has strong ties to mathematical optimization, which deliver methods, theory and application domains to the field.
words    | {machine,learning,is,closely,related,to,and,often,overlaps,with,computational,statistics,a,discipline,that,also,specializes,in,prediction-making,it,has,strong,ties,to,mathematical,optimization,which,deliver,methods,theory,and,application,domains,to,the,field}
-[ RECORD 4 ]--------------------------------------------------------------------------------------------
docid    | 3
contents | California's diverse geography ranges from the Sierra Nevada in the east to the Pacific Coast in the west, from the Redwood–Douglas fir forests of the northwest, to the Mojave Desert areas in the southeast. The center of the state is dominated by the Central Valley, a major agricultural area.
words    | {californias,diverse,geography,ranges,from,the,sierra,nevada,in,the,east,to,the,pacific,coast,in,the,west,from,the,redwood–douglas,fir,forests,of,the,northwest,to,the,mojave,desert,areas,in,the,southeast,the,center,of,the,state,is,dominated,by,the,central,valley,a,major,agricultural,area}
Build a word count table by using the term_frequency function (Term Frequency):

DROP TABLE IF EXISTS documents_tf, documents_tf_vocabulary;

SELECT madlib.term_frequency('documents',     -- input table
                             'docid',         -- document id column
                             'words',         -- vector of words in document
                             'documents_tf',  -- output documents table with term frequency
                             TRUE);           -- TRUE to create vocabulary table

\x off
SELECT * FROM documents_tf ORDER BY docid LIMIT 20;
 docid | wordid | count
-------+--------+-------
     0 |     71 |     1
     0 |     90 |     1
     0 |     56 |     1
     0 |     68 |     2
     0 |     85 |     1
     0 |     28 |     1
     0 |     35 |     1
     0 |     54 |     1
     0 |     64 |     2
     0 |      8 |     1
     0 |     29 |     1
     0 |     80 |     1
     0 |     24 |     1
     0 |     11 |     1
     0 |     17 |     1
     0 |     32 |     1
     0 |      3 |     1
     0 |     42 |     1
     0 |     97 |     1
     0 |     95 |     1
(20 rows)

Here is the associated vocabulary table. Note that wordid starts at 0:
SELECT * FROM documents_tf_vocabulary ORDER BY wordid LIMIT 20;
 wordid | word
--------+--------------
      0 | 1960s
      1 | 1968
      2 | 301
      3 | a
      4 | agricultural
      5 | also
      6 | american
      7 | an
      8 | analyzing
      9 | and
     10 | application
     11 | are
     12 | area
     13 | areas
     14 | average
     15 | balance
     16 | batting
     17 | bayesian
     18 | between
     19 | by
(20 rows)

The total number of words in the vocabulary across all documents is:
SELECT COUNT(*) FROM documents_tf_vocabulary;
 count
-------
   103
(1 row)
Next, train an LDA model with 5 topics:

DROP TABLE IF EXISTS lda_model, lda_output_data;

SELECT madlib.lda_train( 'documents_tf',    -- documents table in the form of term frequency
                         'lda_model',       -- model table created by LDA training (not human readable)
                         'lda_output_data', -- readable output data table
                         103,               -- vocabulary size
                         5,                 -- number of topics
                         10,                -- number of iterations
                         5,                 -- Dirichlet prior for the per-doc topic multinomial (alpha)
                         0.01               -- Dirichlet prior for the per-topic word multinomial (beta)
                       );

\x on
SELECT * FROM lda_output_data ORDER BY docid;
-[ RECORD 1 ]----+------------------------------------------------------------------------------------------------------
docid            | 0
wordcount        | 22
words            | {24,17,11,95,90,85,68,54,42,35,28,8,3,97,80,71,64,56,32,29}
counts           | {1,1,1,1,1,1,2,1,1,1,1,1,1,1,1,1,2,1,1,1}
topic_count      | {4,2,4,3,9}
topic_assignment | {4,2,4,1,2,1,2,2,0,3,4,4,3,0,0,4,0,4,4,4,3,4}
-[ RECORD 2 ]----+------------------------------------------------------------------------------------------------------
docid            | 1
wordcount        | 37
words            | {1,50,49,46,19,16,14,9,7,0,90,68,57,102,101,100,93,88,75,74,59,55,53,48,39,21,18,15,6,2}
counts           | {1,3,1,1,1,1,1,1,1,1,5,2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1}
topic_count      | {2,5,14,9,7}
topic_assignment | {0,3,3,3,1,4,2,2,2,1,3,1,2,2,2,2,2,2,2,1,4,3,2,0,4,2,4,2,3,4,3,1,3,4,3,2,4}
-[ RECORD 3 ]----+------------------------------------------------------------------------------------------------------
docid            | 2
wordcount        | 36
words            | {10,27,33,40,47,51,58,62,63,69,72,83,100,99,94,92,91,90,89,87,86,79,76,70,60,52,50,36,30,25,9,5,3}
counts           | {1,1,1,1,1,1,1,1,1,1,1,1,1,1,3,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,2,1,1}
topic_count      | {15,10,1,7,3}
topic_assignment | {0,3,1,3,0,0,3,3,1,0,1,0,0,0,0,1,1,0,4,2,0,4,1,0,1,0,0,4,3,3,3,0,1,1,1,0}
-[ RECORD 4 ]----+------------------------------------------------------------------------------------------------------
docid            | 3
wordcount        | 49
words            | {77,78,81,82,67,65,51,45,44,43,34,26,13,98,96,94,90,84,73,68,66,61,50,41,38,37,31,23,22,20,19,12,4,3}
counts           | {1,1,1,1,1,1,1,1,2,1,1,1,1,1,1,2,11,1,1,2,1,1,3,1,1,1,1,1,1,1,1,1,1,1}
topic_count      | {5,5,26,5,8}
topic_assignment | {4,4,4,0,2,0,0,2,4,4,2,2,2,1,2,4,1,0,2,2,2,2,2,2,2,2,2,2,2,1,2,2,2,2,4,3,3,3,2,3,2,3,2,1,4,2,2,1,0}

Review the summary table:
SELECT voc_size, topic_num, alpha, beta, num_iterations, perplexity, perplexity_iters from lda_model;
-[ RECORD 1 ]----+-----
voc_size         | 103
topic_num        | 5
alpha            | 5
beta             | 0.01
num_iterations   | 10
perplexity       |
perplexity_iters |

The perplexity fields are empty because the optional 'evaluate_every' parameter was not used in this training call.
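The raw topic_count array in the output data table can be turned into a smoothed per-document topic distribution using the Dirichlet posterior mean, \( \hat{\theta}_{d,k} = (n_{d,k} + \alpha)/(N_d + K\alpha) \), where \( n_{d,k} \) is topic_count[k], \( N_d \) is wordcount, and \( K \) is topic_num. A minimal sketch, assuming PostgreSQL 9.4+ for unnest ... WITH ORDINALITY, with alpha = 5 and topic_num = 5 hard-coded to match the training call above:

-- Per-document topic proportions from the raw topic counts.
-- alpha = 5 and topic_num = 5 match the lda_train() call above.
SELECT docid,
       t.ord - 1                             AS topicid,  -- topic ids start at 0
       (t.cnt + 5.0) / (wordcount + 5 * 5.0) AS theta
FROM lda_output_data,
     unnest(topic_count) WITH ORDINALITY AS t(cnt, ord)
ORDER BY docid, topicid;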
First, get the topic description by the top-k words with highest probability:

DROP TABLE IF EXISTS helper_output_table;

SELECT madlib.lda_get_topic_desc( 'lda_model',                -- LDA model generated in training
                                  'documents_tf_vocabulary',  -- vocabulary table that maps wordid to word
                                  'helper_output_table',      -- output table for per-topic descriptions
                                  5);                         -- k: number of top words for each topic

\x off
SELECT * FROM helper_output_table ORDER BY topicid, prob DESC LIMIT 40;
 topicid | wordid |        prob        |       word
---------+--------+--------------------+-------------------
       0 |      3 |  0.111357750647429 | a
       0 |     51 |  0.074361820199778 | is
       0 |     94 |  0.074361820199778 | to
       0 |     70 | 0.0373658897521273 | optimization
       0 |     82 | 0.0373658897521273 | southeast
       0 |     60 | 0.0373658897521273 | machine
       0 |     71 | 0.0373658897521273 | originally
       0 |     69 | 0.0373658897521273 | often
       0 |     99 | 0.0373658897521273 | which
       0 |     83 | 0.0373658897521273 | specializes
       0 |      1 | 0.0373658897521273 | 1968
       0 |     97 | 0.0373658897521273 | variable
       0 |     25 | 0.0373658897521273 | closely
       0 |     93 | 0.0373658897521273 | title
       0 |     47 | 0.0373658897521273 | has
       0 |     65 | 0.0373658897521273 | mojave
       0 |     79 | 0.0373658897521273 | related
       0 |     89 | 0.0373658897521273 | that
       0 |     10 | 0.0373658897521273 | application
       0 |    100 | 0.0373658897521273 | with
       0 |     92 | 0.0373658897521273 | ties
       0 |     54 | 0.0373658897521273 | large
       1 |     94 |  0.130699088145897 | to
       1 |      9 |  0.130699088145897 | and
       1 |      5 | 0.0438558402084238 | also
       1 |     57 | 0.0438558402084238 | league
       1 |     49 | 0.0438558402084238 | hitting
       1 |     13 | 0.0438558402084238 | areas
       1 |     39 | 0.0438558402084238 | favor
       1 |     85 | 0.0438558402084238 | statistical
       1 |     95 | 0.0438558402084238 | topic
       1 |      0 | 0.0438558402084238 | 1960s
       1 |     76 | 0.0438558402084238 | prediction-making
       1 |     86 | 0.0438558402084238 | statistics
       1 |     84 | 0.0438558402084238 | state
       1 |     72 | 0.0438558402084238 | overlaps
       1 |     22 | 0.0438558402084238 | center
       1 |      4 | 0.0438558402084238 | agricultural
       1 |     63 | 0.0438558402084238 | methods
       1 |     33 | 0.0438558402084238 | discipline
(40 rows)

Get the per-word topic counts. This mapping shows how many times a given word is assigned to a topic. E.g., wordid 3 is assigned to topicid 0 three times:
DROP TABLE IF EXISTS helper_output_table;

SELECT madlib.lda_get_word_topic_count( 'lda_model',            -- LDA model generated in training
                                        'helper_output_table'); -- output table for per-word topic counts

SELECT * FROM helper_output_table ORDER BY wordid LIMIT 20;
 wordid | topic_count
--------+-------------
      0 | {0,1,0,0,0}
      1 | {1,0,0,0,0}
      2 | {1,0,0,0,0}
      3 | {3,0,0,0,0}
      4 | {0,0,0,0,1}
      5 | {0,1,0,0,0}
      6 | {1,0,0,0,0}
      7 | {0,0,0,1,0}
      8 | {0,1,0,0,0}
      9 | {0,0,0,3,0}
     10 | {1,0,0,0,0}
     11 | {1,0,0,0,0}
     12 | {0,0,1,0,0}
     13 | {0,0,0,0,1}
     14 | {0,1,0,0,0}
     15 | {0,0,0,0,1}
     16 | {0,1,0,0,0}
     17 | {0,0,1,0,0}
     18 | {1,0,0,0,0}
     19 | {2,0,0,0,0}
(20 rows)

Get the per-topic word counts. This mapping shows which words are associated with each topic by frequency:
DROP TABLE IF EXISTS topic_word_count;

SELECT madlib.lda_get_topic_word_count( 'lda_model',
                                        'topic_word_count');

\x on
SELECT * FROM topic_word_count ORDER BY topicid;
-[ RECORD 1 ]----------------------------------------------------------------------------------------------------------
topicid    | 1
word_count | {1,1,0,0,0,0,0,1,1,0,1,0,0,0,0,1,0,0,1,0,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,1,0,0,1,0,1,1,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0,0,0,0,1,0,0,0,0,1,0,1,0,1,0,0,1,1,0,0,0,0,0,0,0,1,0}
-[ RECORD 2 ]----------------------------------------------------------------------------------------------------------
topicid    | 2
word_count | {0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0,0,0,0,0,1,0,0,0,0,1,0,1,0,1,1,2,0,1,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,1,0,0,0,0,4,0,0,0,0,1,0,0,1,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,0,5,0,1,0,0,1,0,0,0}
-[ RECORD 3 ]----------------------------------------------------------------------------------------------------------
topicid    | 3
word_count | {0,0,0,0,0,0,0,0,0,3,0,1,0,1,1,0,0,0,0,2,0,0,0,0,1,0,0,1,0,1,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,2,0,0,0,0,0,0,0,0,0,1,0,0,2,1,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,1,0,0,0,0,0}
-[ RECORD 4 ]----------------------------------------------------------------------------------------------------------
topicid    | 4
word_count | {0,0,1,0,0,1,1,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,1,0,0,1,0,0,1,0,0,0,1,0,0,1,1,1,0,0,0,1,0,0,0,0,0,0,1,0,7,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,0,0,1,0,0,0,0,1,0,0,0,0,1,1,1,0,0,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,1}
-[ RECORD 5 ]----------------------------------------------------------------------------------------------------------
topicid    | 5
word_count | {0,0,0,3,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,1,1,0,0,1,0,0,1,0,0,0,0,0,0,0,0,18,0,0,0,0,0,0,0,1,0,2,0,0}

Get the per-document word to topic mapping:
DROP TABLE IF EXISTS helper_output_table;

SELECT madlib.lda_get_word_topic_mapping('lda_output_data',      -- Output table from training
                                         'helper_output_table');

\x off
SELECT * FROM helper_output_table ORDER BY docid LIMIT 40;
 docid | wordid | topicid
-------+--------+---------
     0 |     56 |       1
     0 |     54 |       1
     0 |     42 |       2
     0 |     35 |       1
     0 |     32 |       1
     0 |     29 |       3
     0 |     28 |       4
     0 |     24 |       3
     0 |     17 |       2
     0 |     11 |       0
     0 |      8 |       1
     0 |      3 |       0
     0 |     97 |       0
     0 |     95 |       3
     0 |     90 |       0
     0 |     85 |       0
     0 |     80 |       2
     0 |     71 |       2
     0 |     68 |       0
     0 |     64 |       1
     1 |      2 |       0
     1 |      1 |       0
     1 |      0 |       1
     1 |    102 |       4
     1 |    101 |       2
     1 |    100 |       1
     1 |     93 |       3
     1 |     90 |       2
     1 |     90 |       0
     1 |     88 |       1
     1 |     75 |       1
     1 |     74 |       3
     1 |     68 |       0
     1 |     59 |       2
     1 |     57 |       4
     1 |     55 |       3
     1 |     53 |       3
     1 |     50 |       0
     1 |     49 |       1
     1 |     48 |       0
(40 rows)
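Since the mapping reports word ids rather than words, it can be joined with the vocabulary table for readability. A minimal sketch using the tables from this example:

-- Resolve word ids to words via the vocabulary table.
SELECT m.docid, v.word, m.topicid
FROM helper_output_table m
JOIN documents_tf_vocabulary v USING (wordid)
ORDER BY m.docid, m.topicid, v.word;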
Use the learned LDA model for prediction, i.e., to label documents:

DROP TABLE IF EXISTS outdata_predict;

SELECT madlib.lda_predict( 'documents_tf',    -- Document to predict
                           'lda_model',       -- LDA model from training
                           'outdata_predict'  -- Output table for predict results
                         );

\x on
SELECT * FROM outdata_predict;
-[ RECORD 1 ]----+------------------------------------------------------------------------------------------------------
docid            | 0
wordcount        | 22
words            | {17,11,28,29,95,3,32,97,85,35,54,80,64,90,8,24,42,71,56,68}
counts           | {1,1,1,1,1,1,1,1,1,1,1,1,2,1,1,1,1,1,1,2}
topic_count      | {1,3,16,1,1}
topic_assignment | {2,2,1,0,2,2,2,3,2,2,2,2,2,2,4,2,2,2,2,2,1,1}
-[ RECORD 2 ]----+------------------------------------------------------------------------------------------------------
docid            | 1
wordcount        | 37
words            | {90,101,2,88,6,7,75,46,74,68,39,9,48,49,102,50,59,53,55,57,100,14,15,16,18,19,93,21,0,1}
counts           | {5,1,1,1,1,1,1,1,1,2,1,1,1,1,1,3,1,1,1,1,1,1,1,1,1,1,1,1,1,1}
topic_count      | {0,1,11,6,19}
topic_assignment | {4,4,4,4,4,4,4,4,4,2,4,2,2,1,3,2,2,4,4,4,3,3,3,4,3,3,2,4,4,2,2,4,2,4,2,4,2}
-[ RECORD 3 ]----+------------------------------------------------------------------------------------------------------
docid            | 2
wordcount        | 36
words            | {90,3,5,9,10,25,27,30,33,36,40,47,50,51,52,58,60,62,63,69,70,72,76,79,83,86,87,89,91,92,94,99,100}
counts           | {1,1,1,2,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,3,1,1}
topic_count      | {26,3,5,1,1}
topic_assignment | {4,0,0,2,2,0,0,0,0,2,0,0,0,3,0,0,0,0,0,0,0,0,0,2,0,2,0,0,0,0,0,1,1,1,0,0}
-[ RECORD 4 ]----+------------------------------------------------------------------------------------------------------
docid            | 3
wordcount        | 49
words            | {41,38,3,77,78,94,37,81,82,19,84,34,96,13,31,98,90,51,26,61,23,22,50,65,66,67,45,44,68,4,12,43,20,73}
counts           | {1,1,1,1,1,2,1,1,1,1,1,1,1,1,1,1,11,1,1,1,1,1,3,1,1,1,1,2,2,1,1,1,1,1}
topic_count      | {0,28,0,4,17}
topic_assignment | {1,1,4,1,1,1,1,1,1,4,1,1,1,3,1,1,1,4,4,4,4,4,4,4,4,4,4,4,4,1,1,1,4,3,3,3,1,1,4,4,1,1,1,1,1,1,1,1,1}

The test table is expected to be in the same form as the training table and can be created with the same process. The LDA prediction results have the same format as the output table generated by the LDA training function.
The word to topic mapping can also be computed from the prediction output:

DROP TABLE IF EXISTS helper_output_table;

SELECT madlib.lda_get_word_topic_mapping('outdata_predict',      -- Output table from prediction
                                         'helper_output_table');

\x off
SELECT * FROM helper_output_table ORDER BY docid LIMIT 40;
 docid | wordid | topicid
-------+--------+---------
     0 |     54 |       4
     0 |     42 |       1
     0 |     35 |       4
     0 |     32 |       4
     0 |     29 |       4
     0 |     28 |       1
     0 |     24 |       4
     0 |     17 |       1
     0 |     11 |       4
     0 |      8 |       4
     0 |      3 |       0
     0 |     97 |       4
     0 |     95 |       1
     0 |     90 |       2
     0 |     85 |       4
     0 |     80 |       0
     0 |     71 |       0
     0 |     68 |       0
     0 |     64 |       4
     0 |     64 |       1
     0 |     56 |       4
     1 |      2 |       4
     1 |      1 |       4
     1 |      0 |       2
     1 |    102 |       4
     1 |    101 |       4
     1 |    100 |       4
     1 |     93 |       4
     1 |     90 |       2
     1 |     90 |       0
     1 |     88 |       2
     1 |     75 |       2
     1 |     74 |       0
     1 |     68 |       0
     1 |     59 |       4
     1 |     57 |       2
     1 |     55 |       2
     1 |     53 |       1
     1 |     50 |       0
     1 |     49 |       2
(40 rows)
Call the perplexity function to see how well the learned model fits the prediction output:

SELECT madlib.lda_get_perplexity( 'lda_model',       -- LDA model from training
                                  'outdata_predict'  -- Prediction output
                                );
 lda_get_perplexity
--------------------
    79.481894411824
(1 row)
Now train again, this time evaluating perplexity every 2 iterations with a tolerance for early stopping:

DROP TABLE IF EXISTS lda_model_perp, lda_output_data_perp;

SELECT madlib.lda_train( 'documents_tf',         -- documents table in the form of term frequency
                         'lda_model_perp',       -- model table created by LDA training (not human readable)
                         'lda_output_data_perp', -- readable output data table
                         103,                    -- vocabulary size
                         5,                      -- number of topics
                         30,                     -- number of iterations
                         5,                      -- Dirichlet prior for the per-doc topic multinomial (alpha)
                         0.01,                   -- Dirichlet prior for the per-topic word multinomial (beta)
                         2,                      -- Evaluate perplexity every n iterations
                         0.3                     -- Tolerance to stop iteration
                       );

\x on
SELECT voc_size, topic_num, alpha, beta, num_iterations, perplexity, perplexity_iters FROM lda_model_perp;
-[ RECORD 1 ]----+----------------------------------------------------------------------------------------------------
voc_size         | 103
topic_num        | 5
alpha            | 5
beta             | 0.01
num_iterations   | 14
perplexity       | {70.0297335165,65.6497887327,70.2040806534,68.2594871716,70.3816093812,67.9193935299,67.6325562682}
perplexity_iters | {2,4,6,8,10,12,14}

Training stops at iteration 14 because the perplexity tolerance was reached. There are 7 perplexity values because we computed perplexity only every 2nd iteration to save time. As expected, the perplexity on the training data is the same as the value from the final iteration:
\x off
SELECT madlib.lda_get_perplexity( 'lda_model_perp',
                                  'lda_output_data_perp'
                                );
 lda_get_perplexity
--------------------
    67.632556268157
(1 row)
[1] D. M. Blei, A. Y. Ng, and M. I. Jordan, "Latent Dirichlet Allocation," Journal of Machine Learning Research, vol. 3, pp. 993-1022, 2003.
[2] T. Griffiths and M. Steyvers, "Finding scientific topics," PNAS, vol. 101, pp. 5228-5235, 2004.
[3] Y. Wang, H. Bai, M. Stanton, W.-Y. Chen, and E. Y. Chang, "PLDA: Parallel Latent Dirichlet Allocation for Large-scale Applications," AAIM, 2009.
[4] http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation
[5] J. Chang, "Collapsed Gibbs sampling methods for topic models," R package lda documentation, 2010.