Web Scraper Job Queue

Dependencies

Python 3
Flask
PostgreSQL
SQLAlchemy
Redis

Getting Started

$ virtualenv env
$ source env/bin/activate
$ pip install -r requirements.txt
$ createdb sites
$ python server.py init

To run the server again after the first init:

$ python server.py

The server runs on: http://localhost:8080/

Usage

Add Task:

curl -X POST http://localhost:8080/new/URL

Replace URL with the site you wish to scrape.

Example:
curl -X POST http://localhost:8080/new/www.google.com

Please only enter sites in the 'www.domain.tld' format.
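
The same request can be issued from Python instead of curl. Below is a minimal sketch using the requests library; the exact shape of the response body is an assumption, so it is printed as raw text:

import requests

# Enqueue a scrape task for www.google.com (example target).
resp = requests.post("http://localhost:8080/new/www.google.com")
print(resp.text)  # the task ID returned by /new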

Check Status:

curl -X GET http://localhost:8080/status/ID

Replace ID with the task ID returned by /new.

Example:
curl -X GET http://localhost:8080/status/1
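
For scripted use, the status endpoint can be polled until the task finishes. A minimal sketch, again using requests; the status values and response format are assumptions, so the body is printed as-is:

import time
import requests

task_id = 1  # ID returned by /new
for _ in range(5):  # poll a few times, two seconds apart
    resp = requests.get(f"http://localhost:8080/status/{task_id}")
    print(resp.text)  # raw status response from the API
    time.sleep(2)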

File structure

.
├── server.py    # Flask RESTful web API
├── model.py     # Database model
└── helpers.py   # Worker functions
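
To illustrate how these pieces fit together, here is a minimal sketch of a Flask app in the spirit of server.py. The route names match the API above, but the in-memory task store and handler bodies are placeholder assumptions, not the repository's actual model.py/helpers.py code (which use PostgreSQL and a Redis-backed worker queue):

from flask import Flask, jsonify

app = Flask(__name__)
tasks = {}  # placeholder in-memory store; the real app persists tasks in PostgreSQL

@app.route("/new/<path:url>", methods=["POST"])
def new_task(url):
    # Assign an ID and record the URL to scrape (the real app hands this to a worker via Redis).
    task_id = len(tasks) + 1
    tasks[task_id] = {"url": url, "status": "pending"}
    return jsonify({"id": task_id})

@app.route("/status/<int:task_id>", methods=["GET"])
def status(task_id):
    # Report the current state of a previously enqueued task.
    task = tasks.get(task_id)
    if task is None:
        return jsonify({"error": "unknown task"}), 404
    return jsonify(task)

if __name__ == "__main__":
    app.run(port=8080)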

About

distributed web-scraper task queue / RESTful API
