This repository implements context-based similar document search, following Rich Anchor's blog.
Context-based similar documents is the problem of finding the documents most similar to a given document. The blog's approach uses an LDA (Latent Dirichlet Allocation) model to build generic topics from the documents in the database, vectorizes each document over those topics, and then uses LSH (Locality-Sensitive Hashing) to find the documents most similar to the query (its nearest neighbors).
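To illustrate the LSH step in isolation (this is a self-contained sketch, not the project's actual scikit-learn LSHForest code; the vectors and function names are made up, and it uses Python 3 with a recent NumPy), the snippet below hashes topic vectors with random hyperplanes. Documents whose bit signatures agree on many planes are likely near neighbors under cosine similarity.

```python
import numpy as np

def lsh_signatures(vectors, n_planes=8, seed=0):
    """Hash each row vector to a bit signature via random hyperplanes."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_planes, vectors.shape[1]))
    # Bit i is 1 when the vector lies on the positive side of plane i;
    # nearby vectors fall on the same side of most planes.
    return (vectors @ planes.T > 0).astype(int)

# Toy LDA-style topic vectors (rows = documents, columns = topic weights).
topic_vectors = np.array([[0.9, 0.1, 0.0],
                          [0.8, 0.2, 0.0],   # close to document 0
                          [0.0, 0.1, 0.9]])  # different topic mix
signatures = lsh_signatures(topic_vectors)
print(signatures.shape)  # (3, 8): one 8-bit signature per document
```

Candidates sharing a signature (or within a small Hamming distance) are then re-ranked exactly, which is the role LSHForest plays in the pipeline.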
This software is written in Python 2.x, using the LDA model provided by gensim and the LSHForest implementation provided by scikit-learn.
It depends on NumPy, scikit-learn, and gensim, Python packages for scientific computing. You must install them before using this software.
The simplest way to install them is with pip:
# pip install -U numpy scikit-learn gensim
This software requires two data files for training:
- An input data file containing the documents in the database, one document per line.
- A stop-words file listing the stop words that are filtered out before processing.
For querying, one additional file containing the given documents is needed.
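A minimal sketch of that preprocessing, with made-up inline contents: in the real pipeline the documents come from the input data file (one per line) and the stop words from the stop-words file.

```python
def tokenize(document, stopwords):
    """Lower-case, split on whitespace, and drop stop words."""
    return [w for w in document.lower().split() if w not in stopwords]

# Inlined stand-ins for the input data file and stop-words file.
documents = ["The cat sat on the mat", "Dogs chase the cat"]
stopwords = {"the", "on"}

print([tokenize(d, stopwords) for d in documents])
# [['cat', 'sat', 'mat'], ['dogs', 'chase', 'cat']]
```

The filtered token lists are what the dictionary and corpus are then built from.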
To run the demo, you can download the sample input files, which were extracted from eva.vn and provided by the Rich Anchor team:
You can run this software with the following command line:
python main.py
You can modify the source code to fit your data:
- main.py:
  - input_file: path to the input data file
  - stopwords_file: path to the stop-words file
  - num_topics: number of topics in the LDA model
  - prefix_name: prefix for saved files (dictionary, corpus, model, etc.)
  - directory: path to the saved-data directory
  - query: path to the query file
- setting.py: stores the default settings
- corpus.py:
  - get_docs(): modify to fit your data format
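For instance, if your data follows the default format of one document per line, a hypothetical get_docs() could look like the sketch below; adapt the parsing for other layouts (e.g. a TSV with an id column).

```python
def get_docs(lines):
    """Return one document per non-empty line, whitespace-stripped."""
    return [line.strip() for line in lines if line.strip()]

# Normally the lines come from the input data file; inlined here.
sample = ["first document\n", "\n", "second document\n"]
print(get_docs(sample))  # ['first document', 'second document']
```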
- My review: Implementing context-based similar documents with Python
- Rich Anchor blog: Context-based similar documents
- Rich Anchor blog: Topic modeling with LDA
- Gensim API Tutorial
- scikit-learn User Guide
Xuan-Khoai Pham (phamxuankhoai@gmail.com)