Run the project locally:
heroku local:start -f Procfile_dev
Run any manage.py command:
heroku local:run python manage.py <cmd>
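The Procfile_dev referenced above is not shown here. For a Django project it might look like the following sketch; the process name, the use of the built-in dev server, and the $PORT variable supplied by heroku local are assumptions about this setup:

```
web: python manage.py runserver 0.0.0.0:$PORT
```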
TODO:
- rewrite README.md
- media files to external storage
- Postgres instead of sqlite in PROD
- Postgres instead of sqlite in DEV
- global vars from os.environ
- easier way to start crawler
- REST Endpoint to start crawler
- Refactor crawler
- migrate data
- django-jet
- crawler cronjob (jet dashboard?)
- document crawler API
- setup email config
- migrate pip & virtualenv to pipenv
- api: JSON API to access the data
- crawler: web crawler to fetch teams, fixtures and results from suisserugby.com
- swissrugby: back end and front end of the web application
- docs: documentation
We recently migrated from pip and virtualenv to pipenv.
If you don't have pipenv installed yet, install it via
pip install pipenv
Then run
pipenv install
to set up the virtualenv and install all packages.
To activate the virtualenv in the shell, just run
pipenv shell
To run the crawler in the background without logging:
python manage.py crawl_and_update > /dev/null 2>&1 &
python manage.py update_statistics
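The trailing & runs the crawler in the background while the redirections discard its output. The same pattern in isolation, with a placeholder command standing in for manage.py:

```shell
# run a command in the background with all output discarded
sleep 1 > /dev/null 2>&1 &
bgpid=$!        # PID of the background job
wait "$bgpid"   # block until it finishes; $? is then its exit status
echo "crawler stand-in exited with status $?"
```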
To export the locked dependencies to a requirements.txt:
pipenv lock -r > requirements.txt
(Newer pipenv releases replace the -r flag with the pipenv requirements command.)
Create a script that runs your crawler, e.g. named update_srs.sh, with the following content:
#!/bin/bash
srsdir="/path/to/your/installation"
cd "$srsdir" || exit 1
source env/bin/activate
python manage.py crawl_and_update > /path/to/your/script/update_srs.log 2> /path/to/your/script/update_srs_err.log
python manage.py update_statistics
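The two redirections in the script send normal output and errors to separate files. A self-contained sketch of the same pattern, with echo commands standing in for the crawler:

```shell
# split stdout and stderr of a command into separate log files
logdir=$(mktemp -d)
{ echo "updated 12 fixtures"; echo "warning: request timed out" 1>&2; } \
    > "$logdir/update_srs.log" 2> "$logdir/update_srs_err.log"

cat "$logdir/update_srs.log"       # contains only the stdout line
cat "$logdir/update_srs_err.log"   # contains only the stderr line
```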
Open your crontab for editing with crontab -e
and add the following lines:
SHELL=/bin/bash # important: cron's default shell may not support the source command used to activate the virtualenv
# get latest data from suisserugby.com
0 3 * * * /path/to/script/update_srs.sh
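The five leading fields of a crontab entry are minute, hour, day of month, month, and day of week, so 0 3 * * * fires every day at 03:00. If you later wanted the crawler to run twice a day, say at 03:00 and 15:00, the entry would be:

```
0 3,15 * * * /path/to/script/update_srs.sh
```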