visual-commonsense-pytorch

PyTorch code for a visual commonsense model. Paper: From Recognition to Cognition: Visual Commonsense Reasoning, https://arxiv.org/pdf/1811.10830.pdf

Note that this is an unofficial implementation.

Steps for getting BERT results:

  1. This repo uses the PyTorch pretrained BERT package from Hugging Face: https://github.com/huggingface/pytorch-pretrained-BERT. That needs to be installed first (see the sketch after this list).
  2. Download the data from http://visualcommonsense.com/download.html. You should end up with two folders: images and annotations.
  3. cd code
  4. Edit cfg.json to set vcr_tdir (a hypothetical sample follows this list).
  5. Run python bert_main.py "some_unique_id" --task_type $T. The task type can be QA or QA_R; Q_AR is under progress.
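
The steps above, condensed into shell form as a minimal sketch: the unique-id argument "my_first_run" is just a placeholder, and the PyPI package name pytorch-pretrained-bert is assumed from the Hugging Face repo linked in step 1.

```bash
# Install the Hugging Face PyTorch BERT port (assumed PyPI name; see step 1)
pip install pytorch-pretrained-bert

# From the repo root, move into the code directory and launch a QA run;
# "my_first_run" is a placeholder unique id
cd code
python bert_main.py "my_first_run" --task_type QA
```

For step 4, this README does not show the contents of cfg.json; a hypothetical minimal version, assuming vcr_tdir should point at the directory that contains the downloaded images and annotations folders, might look like:

```json
{
  "vcr_tdir": "/path/to/vcr"
}
```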

Results:

QA gives 59% on the validation set and QA_R gives 66%, both of which are higher than the numbers reported in the paper (53% and 64%). I am not sure why this is the case, though. Any inputs are welcome.
