text mining - Perplexity in topic modeling
I have run LDA using the topicmodels package on my training data. How can I determine the perplexity of the fitted model? I read the documentation, but I'm not sure which code to use.
Here's what I have so far:
library(topicmodels)

burnin <- 500
iter <- 1000
#keep <- 30
k <- 4

results_training <- LDA(dtm_training, k, method = "Gibbs",
                        control = list(burnin = burnin, iter = iter))

# top 10 terms per topic and the most likely topics per document
terms <- terms(results_training, 10)
topic <- topics(results_training, 4)

# posterior probability of each document on each topic
posterior <- posterior(results_training)[[2]]
It works perfectly. My question is: how can I use perplexity on the testing data (results_testing), and how should I interpret the resulting perplexity value?
Thanks
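For reference, here is a rough sketch of what I think the call should look like, assuming the held-out documents are in a DocumentTermMatrix called dtm_testing (a placeholder name) built with the same vocabulary as dtm_training, and that topicmodels' perplexity() accepts new data for a Gibbs-fitted model. I'm not sure this is the right usage:

# Rough sketch -- not verified. dtm_testing is a placeholder for the
# held-out DocumentTermMatrix, built with the same terms as dtm_training.
library(topicmodels)

# perplexity() should re-estimate the topic proportions for the new
# documents and return the held-out perplexity
perp_test <- perplexity(results_training, newdata = dtm_testing)
perp_test

From what I understand, perplexity is the exponential of the negative per-word log-likelihood on the held-out documents, so lower values mean the model assigns higher probability to the unseen text. Is it mainly useful for comparing models (for example, different values of k) rather than as an absolute score?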