sift - Text modelling framework

sift is a toolkit for extracting models of entities and text from a corpus of linked documents.

What can it do?

sift is written in Python, runs on Spark, and is completely modular.

Out of the box, you can:

  • Convert Wikipedia articles into JSON objects without all the MediaWiki cruft (see the sketch after this list)
  • Extract entity relations from Wikidata and align them with Wikipedia mentions
  • Model entity popularity, alternative names and relatedness using inlinks
  • Preprocess text documents for machine learning pipelines
  • Push output into datastores like MongoDB and Redis
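To give a rough idea of the markup-to-JSON step, here is a generic PySpark sketch that strips simple [[link]] markup and emits one JSON object per article. It is an illustration of the kind of transformation sift performs, not the sift API itself; the input path and the markup handling are simplified assumptions.

import json
import re

from pyspark import SparkContext

# Turn [[Target|anchor]] into "anchor" and [[Target]] into "Target".
LINK_RE = re.compile(r"\[\[(?:[^|\]]*\|)?([^\]]+)\]\]")

def clean(markup):
    return LINK_RE.sub(r"\1", markup)

if __name__ == "__main__":
    sc = SparkContext(appName="markup-to-json")

    # (path, raw markup) pairs; "latest/" is an illustrative path only --
    # sift's own readers handle the real dump format.
    articles = sc.wholeTextFiles("latest/")

    records = articles.map(lambda kv: json.dumps({
        "id": kv[0],
        "text": clean(kv[1]),
    }))
    records.saveAsTextFile("articles_json")
    sc.stop()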

Quick Start

Install

pip install git+http://git@github.com/wikilinks/sift.git

Getting Started

To use sift, you'll need some data.

If you'd like to use Wikipedia data, sift includes a helper script for downloading the latest dumps.

Download the latest partitioned Wikipedia dump into the 'latest' directory:

download-wikipedia latest

Once you've got some data, take a look at the sample notebook: sift.ipynb.

Spark

sift uses Spark to process corpora in parallel.

If you'd like to make use of an existing Spark cluster, ensure the SPARK_HOME environment variable is set.

If not, that's fine. sift will prompt you to download and run Spark locally, utilising multiple cores on your system.
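If you want to confirm your Spark setup before running a sift job, a minimal PySpark check like the one below works against either a cluster installation picked up via SPARK_HOME or a local download. This is generic Spark usage, not sift-specific.

from pyspark import SparkConf, SparkContext

# "local[*]" runs one worker thread per available core on this machine.
conf = SparkConf().setAppName("spark-check").setMaster("local[*]")
sc = SparkContext(conf=conf)

print(sc.parallelize(range(100)).sum())  # expect 4950
sc.stop()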
