
# VisualSem Thesis-based

## Introduction

This repository contains code to recreate VisualSem, a multimodal, multilingual knowledge graph, as well as code to use VisualSem for representation learning and to examine how VisualSem relates to the MSCOCO dataset. Since these are two separate parts, each has its own README. The code here comes from my master's thesis; if you are looking for the official shared paper code, I refer you to this GitHub repository.

## Disclaimer

This multimodal knowledge base is derived from BabelNet and is therefore only accessible for research purposes.

## Environment

The same environment can be used for both projects in this repository: creating VisualSem and the representation learning experiments.

Create a Python 3.6 virtual environment, activate it, and install the requirements:

```sh
python3.6 -m venv venv_name
source venv_name/bin/activate   # activate so packages install into the venv
pip install -r requirements.txt
```
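As a quick sanity check (a minimal sketch; `venv_name` is just the placeholder name used above), you can confirm the environment is active and the installed packages resolved cleanly:

```sh
source venv_name/bin/activate
python --version   # should report Python 3.6.x
pip check          # reports any broken or missing dependency requirements
```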

## VisualSem

If you are interested in re-creating the VisualSem dataset, I refer you to this README.

## Representation Learning

If you already have VisualSem and want to use it for representation learning, I refer you to this README.

## Citation

If you found this repository useful, please cite our paper.
