This repository contains code to recreate VisualSem, a multi-modal, multilingual knowledge graph, as well as code to use VisualSem for representation learning and to examine how VisualSem relates to the MSCOCO dataset. Since these are two separate projects, each has its own README. This repository contains code from my master's thesis. If you are looking for the official shared paper code, I refer you to this GitHub
This multi-modal knowledge base is derived from BabelNet and, as such, is only accessible for research purposes.
The same environment can be used for both projects within this repository: creating VisualSem and the representation learning.
Create an environment in Python 3.6 and install the requirements as below.
python3.6 -m venv venv_name
source venv_name/bin/activate
pip install -r requirements.txt
If you are interested in re-creating the VisualSem dataset, I refer you to this README.
If you already have VisualSem and want to use it for representation learning, I refer you to this README.
If you found this repository useful, please cite our paper.