
# Baleen

An automated ingestion service for blogs to construct a corpus for NLP research.


[Image: "Space Whale"]

## Quick Start

This quick start is intended to get you set up with Baleen in development mode (since the project is still under development). If you'd like to run Baleen in production, please see the documentation.

1. Clone the repository

```
$ git clone git@github.com:bbengfort/baleen.git
$ cd baleen
```

2. Create a virtualenv and install the dependencies

```
$ virtualenv venv
$ source venv/bin/activate
$ pip install -r requirements.txt
```

3. Add the baleen module to your $PYTHONPATH via the virtualenv.

```
$ echo $(pwd) > venv/lib/python2.7/site-packages/baleen.pth
```
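To confirm the `.pth` file took effect, you can try importing the package from the virtualenv's interpreter. This is just a convenience check, assuming the repository root contains the `baleen` package:

```python
# Run inside the activated virtualenv; a successful import means the
# .pth file is adding the repository root to sys.path.
import baleen
print(baleen.__file__)  # should point into your cloned repository
```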

4. Create your local configuration file. Edit it with the connection details for your local MongoDB server. This is also a good time to check that you can create a database called `baleen` on Mongo.

```
$ cp conf/baleen-example.yaml conf/baleen.yaml
```

```yaml
debug: true
testing: false

database:
    host: localhost
    port: 27017
    name: baleen
```
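To verify that MongoDB is reachable with those settings, a minimal probe with pymongo (install it with `pip install pymongo` if it isn't already in the venv) might look like the sketch below. Mongo creates databases lazily, so inserting a single document is enough to force the `baleen` database to exist:

```python
# Minimal MongoDB connectivity check; a sketch, not part of Baleen itself.
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
db = client.baleen                        # created lazily on first write
db.healthcheck.insert_one({"ok": True})   # force the database into existence
print(client.list_database_names())       # 'baleen' should appear here
db.healthcheck.drop()                     # remove the probe collection
```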

5. Run the tests to make sure everything is OK.

```
$ make test
```

6. Make sure that the command line utility is ready to go:

```
$ bin/baleen --help
```

7. Import the feeds from the feedly.opml file in the fixtures.

```
$ bin/baleen import fixtures/feedly.opml
Ingested 101 feeds from 1 OPML files
```
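If you're curious what the importer is reading, OPML is just XML with one `outline` element per feed. The following standalone sketch counts the feed URLs in the fixture; it mirrors what the import step consumes, but is not Baleen's own code:

```python
# Illustration only: count feed URLs in an OPML file.
import xml.etree.ElementTree as ET

tree = ET.parse("fixtures/feedly.opml")
feeds = [node.attrib["xmlUrl"]
         for node in tree.iter("outline")
         if "xmlUrl" in node.attrib]
print("Found {} feeds".format(len(feeds)))
```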

8. Perform an ingestion of the feeds that were imported from the feedly.opml file.

```
$ bin/baleen ingest
```

Your Mongo database collections should be created as you add new documents to them, and at this point you're ready to develop!
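One way to see what the ingest wrote is to list the collections and their document counts with pymongo. Which collections exist depends on Baleen's models, so treat this purely as an inspection sketch:

```python
# Inspect the Mongo store after an ingest run; sketch only.
from pymongo import MongoClient

db = MongoClient("localhost", 27017).baleen
for name in db.list_collection_names():
    print(name, db[name].count_documents({}))
```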

## About

Baleen is a tool for ingesting formal natural language data from the discourse of professional and amateur writers: e.g. bloggers and news outlets. Rather than performing web scraping, Baleen focuses on data ingestion through the use of RSS feeds. It performs as much raw data collection as it can, saving data into a Mongo document store.
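In spirit, each ingestion pass fetches every registered feed and stores the raw posts. The sketch below shows that core idea with feedparser and pymongo; the feed URL and field names are illustrative, and this is not Baleen's actual pipeline:

```python
# Conceptual sketch of RSS ingestion into Mongo; not Baleen's actual code.
import feedparser
from pymongo import MongoClient

posts = MongoClient("localhost", 27017).baleen.posts

result = feedparser.parse("http://example.com/rss")  # hypothetical feed URL
for entry in result.entries:
    posts.insert_one({
        "title": entry.get("title"),
        "link": entry.get("link"),
        "content": entry.get("summary"),
    })
```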

## Throughput

[Image: throughput graph]

## Attribution

The image used in this README, "Space Whale" by hbitik, is licensed under CC BY-NC-ND 3.0.
