def _get(self, field):
    """
    Return the value of a given field. The list of all queryable fields is
    detailed below, and can be obtained with the
    :py:func:`~TopicModel._list_fields` method.

    +-----------------------+----------------------------------------------+
    |      Field            | Description                                  |
    +=======================+==============================================+
    | topics                | An SFrame containing a column with the unique|
    |                       | words observed during training, and a column |
    |                       | of arrays containing the probability values  |
    |                       | for each word given each of the topics.      |
    +-----------------------+----------------------------------------------+
    | vocabulary            | An SArray containing the words used. This is |
    |                       | the same as the vocabulary column in the     |
    |                       | topics field above.                          |
    +-----------------------+----------------------------------------------+

    Parameters
    ----------
    field : string
        Name of the field to be retrieved.

    Returns
    -------
    out
        Value of the requested field.
    """
    opts = {'model': self.__proxy__, 'field': field}
    response = _turicreate.toolkits._main.run("text_topicmodel_get_value",
                                              opts)

    if field == 'vocabulary':
        return _SArray(None, _proxy=response['value'])
    elif field == 'topics':
        return _SFrame(None, _proxy=response['value'])
    return response['value']

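# Example (a sketch, not part of the toolkit's test suite): `_get` is usually
# reached through `TopicModel.__getitem__`, e.g. `m['topics']`. Assuming `m`
# is a TopicModel returned by `create` below:
#
#   >>> topics = m._get('topics')      # SFrame: words and per-topic probabilities
#   >>> vocab = m._get('vocabulary')   # SArray: words observed during training
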
def create(dataset,
           num_topics=10,
           initial_topics=None,
           alpha=None,
           beta=.1,
           num_iterations=10,
           num_burnin=5,
           associations=None,
           verbose=False,
           print_interval=10,
           validation_set=None,
           method='auto'):
    """
    Create a topic model from the given data set. A topic model assumes each
    document is a mixture of a set of topics, where for each topic some words
    are more likely than others. One statistical approach of this kind is
    latent Dirichlet allocation (LDA). This method learns such a topic model
    for the given document collection.

    Parameters
    ----------
    dataset : SArray of type dict or SFrame with a single column of type dict
        A bag of words representation of a document corpus.
        Each element is a dictionary representing a single document, where
        the keys are words and the values are the number of times that word
        occurs in that document.

    num_topics : int, optional
        The number of topics to learn.

    initial_topics : SFrame, optional
        An SFrame with a column of unique words representing the vocabulary
        and a column of dense vectors representing the probability of that
        word given each topic. When provided, these values are used to
        initialize the algorithm.

    alpha : float, optional
        Hyperparameter that controls the diversity of topics in a document.
        Smaller values encourage fewer topics per document.
        Provided value must be positive. Default value is 50/num_topics.

    beta : float, optional
        Hyperparameter that controls the diversity of words in a topic.
        Smaller values encourage fewer words per topic.
        Provided value must be positive.

    num_iterations : int, optional
        The number of iterations to perform.

    num_burnin : int, optional
        The number of iterations to perform when inferring the topics for
        documents at prediction time.

    verbose : bool, optional
        When True, print most probable words for each topic while printing
        progress.

    print_interval : int, optional
        The number of iterations to wait between progress reports.

    associations : SFrame, optional
        An SFrame with two columns named "word" and "topic" containing words
        and the topic id that the word should be associated with. These words
        are not considered during learning.

    validation_set : SArray of type dict or SFrame with a single column
        A bag of words representation of a document corpus, similar to the
        format required for `dataset`. This will be used to monitor model
        performance during training. Each document in the provided validation
        set is randomly split: the first portion is used to estimate which
        topic each document belongs to, and the second portion is used to
        estimate the model's performance at predicting the unseen words in
        the test data.

    method : {'auto', 'cgs', 'alias'}, optional
        The algorithm used for learning the model.

        - *cgs:* Collapsed Gibbs sampling
        - *alias:* AliasLDA method

        The default, 'auto', currently uses 'cgs'.

    Returns
    -------
    out : TopicModel
        A fitted topic model. This can be used with
        :py:func:`~TopicModel.get_topics()` and
        :py:func:`~TopicModel.predict()`. While fitting is in progress,
        several metrics are shown, including:

        +------------------+---------------------------------------------------+
        |      Field       | Description                                       |
        +==================+===================================================+
        | Elapsed Time     | The number of elapsed seconds.                    |
        +------------------+---------------------------------------------------+
        | Tokens/second    | The number of unique words processed per second.  |
        +------------------+---------------------------------------------------+
        | Est. Perplexity  | An estimate of the model's ability to model the   |
        |                  | training data. See the documentation on evaluate. |
        +------------------+---------------------------------------------------+

    See Also
    --------
    TopicModel, TopicModel.get_topics, TopicModel.predict,
    turicreate.SArray.dict_trim_by_keys, TopicModel.evaluate

    References
    ----------
    - `Wikipedia - Latent Dirichlet allocation
      <http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation>`_

    - Alias method: Li, A. et al. (2014) `Reducing the Sampling Complexity of
      Topic Models. <http://www.sravi.org/pubs/fastlda-kdd2014.pdf>`_.
      KDD 2014.

    Examples
    --------
    The following example includes an SArray of documents, where each element
    represents a document in "bag of words" representation -- a dictionary
    with word keys whose values are the number of times that word occurred
    in the document:

    >>> docs = turicreate.SArray('https://static.turi.com/datasets/nytimes')

    Once in this form, it is straightforward to learn a topic model.

    >>> m = turicreate.topic_model.create(docs)

    It is also easy to create a new topic model from an old one -- whether
    it was created using Turi Create or another package.

    >>> m2 = turicreate.topic_model.create(docs, initial_topics=m['topics'])

    To manually fix several words to always be assigned to a topic, use
    the `associations` argument. The following will ensure that topic 0
    has the most probability for each of the provided words:

    >>> from turicreate import SFrame
    >>> associations = SFrame({'word': ['hurricane', 'wind', 'storm'],
    ...                        'topic': [0, 0, 0]})
    >>> m = turicreate.topic_model.create(docs, associations=associations)

    More advanced usage allows you to control aspects of the model and the
    learning method.

    >>> import turicreate as tc
    >>> m = tc.topic_model.create(docs,
    ...                           num_topics=20,       # number of topics
    ...                           num_iterations=10,   # algorithm parameters
    ...                           alpha=.01, beta=.1)  # hyperparameters

    To evaluate the model's ability to generalize, we can create a train/test
    split where a portion of the words in each document are held out from
    training.

    >>> train, test = tc.text_analytics.random_split(docs, .8)
    >>> m = tc.topic_model.create(train)
    >>> results = m.evaluate(test)
    >>> print(results['perplexity'])

    """
    dataset = _check_input(dataset)

    _check_categorical_option_type("method", method, ['auto', 'cgs', 'alias'])
    if method == 'cgs' or method == 'auto':
        model_name = 'cgs_topic_model'
    else:
        model_name = 'alias_topic_model'

    # If associations are provided, check they are in the proper format.
    if associations is None:
        associations = _turicreate.SFrame({'word': [], 'topic': []})
    if isinstance(associations, _turicreate.SFrame) and \
       associations.num_rows() > 0:
        assert set(associations.column_names()) == set(['word', 'topic']), \
            "Provided associations must be an SFrame containing a word " \
            "column and a topic column."
        assert associations['word'].dtype == str, \
            "Words must be strings."
        assert associations['topic'].dtype == int, \
            "Topic ids must be of int type."
    if alpha is None:
        alpha = float(50) / num_topics

    if validation_set is not None:
        _check_input(validation_set)

        # The validation set must be a single column of documents.
        if isinstance(validation_set, _turicreate.SFrame):
            column_name = validation_set.column_names()[0]
            validation_set = validation_set[column_name]
        (validation_train, validation_test) = _random_split(validation_set)
    else:
        validation_train = _SArray()
        validation_test = _SArray()

    opts = {'model_name': model_name,
            'data': dataset,
            'num_topics': num_topics,
            'num_iterations': num_iterations,
            'print_interval': print_interval,
            'alpha': alpha,
            'beta': beta,
            'num_burnin': num_burnin,
            'associations': associations}

    # Initialize the model with basic parameters.
    response = _turicreate.extensions._text.topicmodel_init(opts)
    m = TopicModel(response['model'])

    # If initial_topics is provided, load it into the model.
    if isinstance(initial_topics, _turicreate.SFrame):
        assert set(['vocabulary', 'topic_probabilities']) == \
            set(initial_topics.column_names()), \
            "The provided initial_topics does not have the proper format, " \
            "e.g. wrong column names."
        observed_topics = initial_topics['topic_probabilities'].apply(
            lambda x: len(x))
        assert all(observed_topics == num_topics), \
            "Provided num_topics value does not match the number of " \
            "provided initial_topics."

        # Rough estimate of the total number of words.
        weight = len(dataset) * 1000

        opts = {'model': m.__proxy__,
                'topics': initial_topics['topic_probabilities'],
                'vocabulary': initial_topics['vocabulary'],
                'weight': weight}
        response = _turicreate.extensions._text.topicmodel_set_topics(opts)
        m = TopicModel(response['model'])

    # Train the model on the given data set and retrieve predictions.
    opts = {'model': m.__proxy__,
            'data': dataset,
            'verbose': verbose,
            'validation_train': validation_train,
            'validation_test': validation_test}
    response = _turicreate.extensions._text.topicmodel_train(opts)
    m = TopicModel(response['model'])

    return m

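# Example (a sketch): monitoring generalization during training with the
# `validation_set` parameter documented above. `docs` is assumed to be an
# SArray of bag-of-words dictionaries; the document-level split here uses
# `SFrame.random_split`.
#
#   >>> sf = turicreate.SFrame({'docs': docs})
#   >>> train, valid = sf.random_split(0.9)
#   >>> m = turicreate.topic_model.create(train, num_topics=20,
#   ...                                   validation_set=valid)
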
def predict(self, dataset, output_type='assignment', num_burnin=None):
    """
    Use the model to predict topics for each document. The provided
    `dataset` should be an SArray object where each element is a dict
    representing a single document in bag-of-words format, where keys
    are words and values are their corresponding counts. If `dataset` is
    an SFrame, then it must contain a single column of dict type.

    The current implementation will make inferences about each document
    given its estimates of the topics learned when creating the model.
    This is done via Gibbs sampling.

    Parameters
    ----------
    dataset : SArray, SFrame of type dict
        A set of documents to use for making predictions.

    output_type : str, optional
        The type of output desired. This can either be

        - assignment: the returned values are integers in [0, num_topics)
        - probability: each returned prediction is a vector with length
          num_topics, where element k represents the probability that
          document belongs to topic k.

    num_burnin : int, optional
        The number of iterations of Gibbs sampling to perform when
        inferring the topics for documents at prediction time.
        If provided this will override the burnin value set during
        training.

    Returns
    -------
    out : SArray

    See Also
    --------
    evaluate

    Examples
    --------
    Make predictions about which topic each document belongs to.

    >>> docs = turicreate.SArray('https://static.turi.com/datasets/nips-text')
    >>> m = turicreate.topic_model.create(docs)
    >>> pred = m.predict(docs)

    If one is interested in the probability of each topic

    >>> pred = m.predict(docs, output_type='probability')

    Notes
    -----
    For each unique word w in a document d, we sample an assignment to
    topic k with probability proportional to

    .. math::
        p(z_{dw} = k) \\propto (n_{d,k} + \\alpha) * \\Phi_{w,k}

    where

    - :math:`W` is the size of the vocabulary,
    - :math:`n_{d,k}` is the number of other times we have assigned a word
      in document :math:`d` to topic :math:`k`,
    - :math:`\\Phi_{w,k}` is the probability under the model of choosing
      word :math:`w` given the word is of topic :math:`k`. This is the
      matrix returned by calling `m['topics']`.

    This represents a collapsed Gibbs sampler for the document assignments
    while we keep the topics learned during training fixed.
    This process is done in parallel across all documents, five times per
    document.
    """
    dataset = _check_input(dataset)

    if num_burnin is None:
        num_burnin = self.num_burnin

    opts = {'model': self.__proxy__,
            'data': dataset,
            'num_burnin': num_burnin}
    response = _turicreate.toolkits._main.run("text_topicmodel_predict",
                                              opts)
    preds = _SArray(None, _proxy=response['predictions'])

    # Get the most likely topic if probabilities are not requested.
    if output_type not in ['probability', 'probabilities', 'prob']:
        # Equivalent to numpy.argmax(x): pair each probability with its
        # index, take the max pair, and keep the index.
        preds = preds.apply(lambda x: max(_izip(x, _xrange(len(x))))[1])

    return preds

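# Sketch (illustrative only, not code the toolkit runs): the per-word sampling
# probability from the Notes section of `predict`, p(z_dw = k) proportional to
# (n_dk + alpha) * Phi_wk, computed with numpy for one word of one document.
# All names below are hypothetical.
#
#   import numpy as np
#
#   def sample_topic(n_dk, phi_w, alpha, rng=None):
#       """n_dk: (K,) topic counts for the document's other words;
#       phi_w: (K,) p(word | topic) for this word. Returns a sampled topic id."""
#       rng = rng or np.random.default_rng()
#       p = (n_dk + alpha) * phi_w       # unnormalized p(z = k)
#       return rng.choice(len(p), p=p / p.sum())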