An automated ingestion service for blogs to construct a corpus for NLP research.
This quick start is intended to get you set up with Baleen in development mode (since the project is still under development). If you'd like to run Baleen in production, please see the documentation.
- Clone the repository:

    ```
    $ git clone git@github.com:bbengfort/baleen.git
    $ cd baleen
    ```
- Create a virtualenv and install the dependencies:

    ```
    $ virtualenv venv
    $ source venv/bin/activate
    $ pip install -r requirements.txt
    ```
- Add the `baleen` module to your `$PYTHONPATH` via the virtualenv:

    ```
    $ echo $(pwd) > venv/lib/python2.7/site-packages/baleen.pth
    ```
- Create your local configuration file, then edit it with the connection details for your local MongoDB server. This is also a good time to make sure that you can create a database called Baleen on Mongo.

    ```
    $ cp conf/baleen-example.yaml conf/baleen.yaml
    ```

    ```yaml
    debug: true
    testing: false

    database:
        host: localhost
        port: 27017
        name: baleen
    ```
- Run the tests to make sure everything is OK:

    ```
    $ make test
    ```
- Make sure that the command line utility is ready to go:

    ```
    $ bin/baleen --help
    ```
- Import the feeds from the `feedly.opml` file in the fixtures:

    ```
    $ bin/baleen import fixtures/feedly.opml
    Ingested 101 feeds from 1 OPML files
    ```
- Perform an ingestion of the feeds that were imported from the `feedly.opml` file:

    ```
    $ bin/baleen ingest
    ```
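The import step above reads feed definitions out of an OPML file. As a rough sketch of what that involves (the fragment below and the parsing logic are illustrative assumptions, not the contents of `feedly.opml` or Baleen's actual importer), OPML is just XML whose `outline` elements carry an `xmlUrl` attribute for each subscribed feed:

```python
import xml.etree.ElementTree as ET

# A minimal OPML fragment standing in for a Feedly export (structure assumed)
opml = """<?xml version="1.0"?>
<opml version="1.0">
  <body>
    <outline title="Tech" text="Tech">
      <outline type="rss" text="Example Blog" xmlUrl="http://example.com/rss"/>
      <outline type="rss" text="Another Blog" xmlUrl="http://example.org/feed"/>
    </outline>
  </body>
</opml>"""

root = ET.fromstring(opml)
# An importer walks every outline and keeps the ones that point at a feed URL;
# category outlines (like "Tech" above) have no xmlUrl and are skipped
feeds = [node.get("xmlUrl") for node in root.iter("outline") if node.get("xmlUrl")]
print(len(feeds))  # 2 feeds found in this fragment
```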
Your Mongo database collections should be created as you add new documents to them, and at this point you're ready to develop!
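The `database` block in `conf/baleen.yaml` is what tells Baleen where that Mongo database lives. As a minimal sketch (the loaded structure and the URI construction here are assumptions for illustration, not Baleen's actual configuration loader), the YAML settings parse into a nested dict from which a standard Mongo connection URI can be built:

```python
# The settings from conf/baleen.yaml as a YAML parser would load them
# (a dict literal stands in for the actual loading step)
settings = {
    "debug": True,
    "testing": False,
    "database": {"host": "localhost", "port": 27017, "name": "baleen"},
}

db = settings["database"]
# Build a standard MongoDB connection URI from the database block,
# suitable for handing to a client such as pymongo.MongoClient
uri = "mongodb://{host}:{port}/{name}".format(**db)
print(uri)  # mongodb://localhost:27017/baleen
```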
Baleen is a tool for ingesting formal natural language data from the discourse of professional and amateur writers: e.g. bloggers and news outlets. Rather than performing web scraping, Baleen focuses on data ingestion through the use of RSS feeds. It performs as much raw data collection as it can, saving data into a Mongo document store.
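The core of a feed-based ingest can be sketched with nothing but the standard library (Baleen's own parsing pipeline is not shown here; the feed content and field choices below are illustrative assumptions): each RSS `<item>` becomes a raw document headed for the Mongo store.

```python
import xml.etree.ElementTree as ET

# A tiny RSS 2.0 document standing in for a fetched feed
rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>Post One</title><link>http://example.com/1</link></item>
    <item><title>Post Two</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

channel = ET.fromstring(rss).find("channel")
# Each <item> is collected as a raw document, preserving as much data as possible
posts = [
    {"title": item.findtext("title"), "link": item.findtext("link")}
    for item in channel.findall("item")
]
print([p["title"] for p in posts])
```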
The image used in this README, "Space Whale" by hbitik, is licensed under CC BY-NC-ND 3.0.