SHARE is creating a free, open dataset of research (meta)data.
http://share-research.readthedocs.io/en/latest/index.html
We'll be expanding this section in the near future, but, beyond using our API for your own purposes, writing harvesters is a great way to get started. You can find the harvesters we already maintain in this repository.
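As a loose illustration (this is not the actual SHARE harvester API — the function and field names below are hypothetical), a harvester's job boils down to fetching raw records from a provider and normalizing them into a common metadata shape:

```python
# A toy sketch of what a harvester does. The SHARE framework supplies real
# base classes and a full normalization pipeline; everything here is
# hypothetical and only illustrates the fetch -> normalize flow.

RAW_RECORDS = [
    {"title": "On Preprints", "doi": "10.1234/abcd", "creators": "A. Author; B. Author"},
]

def normalize(raw):
    """Map one provider-specific record onto a common metadata shape."""
    return {
        "title": raw["title"],
        "identifiers": [{"uri": "http://dx.doi.org/" + raw["doi"]}],
        "contributors": [name.strip() for name in raw["creators"].split(";")],
    }

def harvest(records):
    """'Harvest' a batch of raw records into normalized documents."""
    return [normalize(r) for r in records]

if __name__ == "__main__":
    for doc in harvest(RAW_RECORDS):
        print(doc["title"], doc["contributors"])
```

Real harvesters also handle paging, rate limits, and date ranges; the normalized output is what eventually lands in the SHARE dataset.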
It is useful to set up a virtual environment so that Python 3 is the designated interpreter and this project's requirements stay isolated from the rest of your system.
mkvirtualenv share -p `which python3.5`
workon share
Once in the share virtual environment, install the necessary requirements, then set up SHARE.
pip install -r requirements.txt
python setup.py develop
Using docker-compose assumes Docker is installed and running. Running ./bootstrap.sh will create and provision the database. If any SHARE containers are running, stop them with docker-compose stop before bootstrapping.
docker-compose build web
docker-compose run --rm web ./bootstrap.sh
Run the API server
docker-compose up -d web
Run Celery
python manage.py celery worker -l DEBUG
Running Celery is particularly useful when working with ember-share, a frontend interface for SHARE.
Harvest data from providers, for example:
./manage.py harvest com.nature --async
./manage.py harvest com.peerj.preprints --async
Pass data to Elasticsearch with runbot. Rerunning this command will index the most recently harvested data; it can take a minute or two to finish.
./manage.py runbot elasticsearch
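Once runbot has indexed data, you can query Elasticsearch directly. A minimal sketch of building a full-text match query body follows — the index name, host, and field name are assumptions, so check your local configuration:

```python
import json

# Hypothetical endpoint -- adjust the host and index name to your local setup.
ES_URL = "http://localhost:9200/share/_search"

# A simple full-text match on document titles (standard Elasticsearch Query DSL).
query = {
    "query": {"match": {"title": "preprints"}},
    "size": 10,
}

body = json.dumps(query)
print(body)  # POST this body to ES_URL with curl or the requests library
```

This only constructs the request body; sending it requires the Elasticsearch container to be up and populated by runbot.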
Build the documentation
cd docs/
pip install -r requirements.txt
make watch
Run the tests
py.test
behave