PyTorch Attentive Summarization NN

Model description

Summarization model for short texts, based on a pure Transformer architecture with BPE subword encoding.

Requirements

  • Python 3.5 or higher
  • PyTorch 0.4.0 or higher
  • sentencepiece
  • gensim
  • tensorboardX
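
Assuming pip is available, the dependencies can typically be installed as shown below (versions are not pinned here; match them to the requirements above as needed):

$ pip install torch sentencepiece gensim tensorboardX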

Usage

Preprocessing

First of all, each part of the dataset must be stored as a .tsv file in a common folder, e.g. ./dataset/train.tsv and ./dataset/test.tsv (./dataset/sample.tsv is used by default for sampling summarizations). Each line of a file has two parts, source and target, separated by a tab character. The tabular files do not need headers. The preprocessor builds the vocabulary and trains word embeddings.
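
For illustration, here is a minimal Python sketch that writes a tiny train.tsv in the expected format (the path follows the convention above; the example texts are made up):

import os

# Hypothetical toy source/target pairs; real data would be full texts and their summaries.
pairs = [
    ("the cat sat on the mat all day long", "cat sits on mat"),
    ("stock markets rallied after the central bank announcement", "markets rally after announcement"),
]

os.makedirs("dataset", exist_ok=True)
with open("dataset/train.tsv", "w", encoding="utf-8") as f:
    for source, target in pairs:
        # One example per line: source, a tab character, then target. No header row.
        f.write(f"{source}\t{target}\n")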

To preprocess the data, run:

$ python preprocess.py
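
This is not the repository's actual preprocess.py, but as a rough sketch of what building a BPE vocabulary and training word embeddings involves (the file names, vocabulary size, and embedding size below are assumptions), with sentencepiece and gensim it could look like:

import sentencepiece as spm
from gensim.models import Word2Vec

# Train a BPE model on the raw texts (corpus.txt is a hypothetical dump of all sources and targets).
spm.SentencePieceTrainer.train(
    input="corpus.txt", model_prefix="bpe", vocab_size=8000, model_type="bpe"
)

# Encode every line into BPE pieces.
sp = spm.SentencePieceProcessor(model_file="bpe.model")
with open("corpus.txt", encoding="utf-8") as f:
    sentences = [sp.encode(line.strip(), out_type=str) for line in f]

# Train word embeddings over the BPE pieces (gensim 4.x API; older gensim uses size= instead of vector_size=).
emb = Word2Vec(sentences=sentences, vector_size=128, window=5, min_count=1)
emb.save("bpe_embeddings.model")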

Training

To train the model, run:

$ python train.py --cuda --pretrain_emb

After training, the model is saved into the ./models_dumps/ directory.
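
Assuming the dump is a standard torch.save checkpoint (the file name below is hypothetical; check ./models_dumps/ for what train.py actually produces), it can be loaded later with:

import torch

# Load the saved model dump onto the CPU; the file name is an assumption.
checkpoint = torch.load("models_dumps/model.pt", map_location="cpu")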

You can tune the model with the many arguments it exposes. If you want good results, use a much deeper configuration of the model; the default configuration is intended for testing only.

Sampling

To generate summarizations, run:

$ python sample.py --inp=sample_part --out=output_file.txt

where sample_part is the name of a dataset part (e.g. ./dataset/sample_part.tsv).

See each module's help for more information and the available arguments.
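
For example, assuming the scripts expose standard argparse help:

$ python preprocess.py --help
$ python train.py --help
$ python sample.py --help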
