abarthakur/create_distantly_supervised_dataset

Code for a 2016 project to annotate the Wikipedia corpus with the Freebase knowledge base.

Project Overview

This project contains code to annotate the Wikipedia corpus with Freebase or DBpedia. It takes the doc files produced by wikiextractor, queries every hyperlinked span to check whether it corresponds to a knowledge-base entity, and then, for each pair of entities that co-occur in a sentence, queries whether a relation holds between them.

See this link for how to load Freebase RDF dumps into Virtuoso for a SPARQL endpoint to use with this code.
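
The pair-wise relation check boils down to asking the endpoint whether any predicate links two entity URIs. Below is a minimal sketch of such a query (not the project's create.py), assuming a local Virtuoso endpoint at its default http://localhost:8890/sparql and the SPARQLWrapper package; the example URIs are illustrative only.

```python
# Minimal sketch of the pair-wise relation check against a local SPARQL endpoint.
# Assumptions: Virtuoso is serving at its default http://localhost:8890/sparql
# and the Freebase (or DBpedia) dump has been loaded into it.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://localhost:8890/sparql"  # assumed Virtuoso default

def relations_between(subject_uri, object_uri):
    """Return the predicates linking two entity URIs (empty list if none)."""
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery(f"""
        SELECT DISTINCT ?p WHERE {{
            <{subject_uri}> ?p <{object_uri}> .
        }}
    """)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [b["p"]["value"] for b in results["results"]["bindings"]]

# Illustrative pair (DBpedia URIs shown; Freebase URIs work the same way):
# relations_between("http://dbpedia.org/resource/Barack_Obama",
#                   "http://dbpedia.org/resource/Hawaii")
```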

Usage

  1. Download an XML dump of Wikipedia (for example from dumps.wikimedia.org) and place it in data/raw.
  2. Install WikiExtractor.py from its repository.
  3. Extract the dump into individual doc files with `WikiExtractor.py` (the `-l` flag preserves links); a sketch of reading this output follows the list:
     `python WikiExtractor.py -l enwiki-20160920-pages-articles-multistream.xml`
  4. Start the Stanford CoreNLP and DBpedia Spotlight servers (see the commands below).
  5. Run create.py with the directory containing the output of step 3 as the first argument and an output directory as the second:
     `python create.py ../data/raw ../data/processed/json_output`
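
For reference, the doc files written by WikiExtractor.py -l wrap each article in a <doc ...> block and keep hyperlinks as <a href="..."> anchors. Below is a minimal sketch of pulling out those hyperlinked spans, the candidates that later get checked against the knowledge base; the regexes assume the 2016-era output format and may need adjusting for newer wikiextractor versions.

```python
# Minimal sketch of reading wikiextractor -l output and collecting hyperlinked
# spans. Assumption: each article is wrapped in <doc id=... url=... title=...>
# and links are kept as <a href="Target%20Title">anchor text</a>.
import re
from urllib.parse import unquote

DOC_RE = re.compile(r'<doc id="[^"]*" url="[^"]*" title="([^"]+)">(.*?)</doc>', re.S)
LINK_RE = re.compile(r'<a href="([^"]+)">(.*?)</a>', re.S)

def iter_docs(path):
    """Yield (article_title, [(link_target, anchor_text), ...]) per article."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    for title, body in DOC_RE.findall(text):
        links = [(unquote(target), anchor) for target, anchor in LINK_RE.findall(body)]
        yield title, links
```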

Run Stanford CoreNLP

```bash
java -mx8g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer \
    -annotators tokenize,ssplit,pos,ner \
    -ner.model edu/stanford/nlp/models/ner/english.conll.4class.distsim.crf.ser.gz \
    -ner.useSUTime false -ner.applyNumericClassifiers false \
    -port 9000 -timeout 30000 -quiet > /dev/null 2>&1
```
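
Once the server is up it can be queried over HTTP; the sketch below requests sentence splits and NER tags using the same annotators the server was started with. The helper name is just for illustration.

```python
# Minimal sketch of querying the CoreNLP server started above.
import json
import requests

def corenlp_annotate(text, url="http://localhost:9000"):
    """POST raw text to the CoreNLP server and return its JSON annotation."""
    props = {"annotators": "tokenize,ssplit,pos,ner", "outputFormat": "json"}
    resp = requests.post(url, params={"properties": json.dumps(props)},
                         data=text.encode("utf-8"), timeout=30)
    resp.raise_for_status()
    return resp.json()

# Each sentence comes back with per-token NER labels, e.g.:
# for sent in corenlp_annotate("Barack Obama was born in Hawaii.")["sentences"]:
#     print([(tok["word"], tok["ner"]) for tok in sent["tokens"]])
```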

Run DBpedia Spotlight

```bash
java -jar dbpedia-spotlight-latest.jar en http://localhost:9999/rest > /dev/null 2>&1
```
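
With the Spotlight server running, a sentence (or a single hyperlinked span) can be checked for DBpedia entities through the /annotate endpoint. A minimal sketch, assuming Spotlight's standard JSON response; the confidence threshold is an arbitrary choice.

```python
# Minimal sketch of querying the DBpedia Spotlight server started above.
import requests

def spotlight_entities(text, url="http://localhost:9999/rest/annotate"):
    """Return (surface form, DBpedia URI) pairs that Spotlight finds in text."""
    resp = requests.get(url,
                        params={"text": text, "confidence": 0.5},
                        headers={"Accept": "application/json"},
                        timeout=30)
    resp.raise_for_status()
    return [(r["@surfaceForm"], r["@URI"])
            for r in resp.json().get("Resources", [])]

# spotlight_entities("Barack Obama was born in Hawaii.")
# -> e.g. [("Barack Obama", "http://dbpedia.org/resource/Barack_Obama"), ...]
```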
