dirbot

This is a Scrapy project to scrape websites from public web directories.

This project is licensed under the terms of the MIT license.

Items

The items scraped by this project are websites. The item is defined in the class:

dirbot.items.Website

See the source code for more details.
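As a rough sketch, the item class looks something like the following, assuming the usual fields for a scraped website (name, url, description); dirbot/items.py has the authoritative definition:

    from scrapy.item import Item, Field

    class Website(Item):
        # Fields assumed from the project's description of scraped websites;
        # check dirbot/items.py for the actual definition.
        name = Field()
        url = Field()
        description = Field()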

Spiders

This project contains one spider called dmoz that you can see by running:

scrapy list
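Since the project defines a single spider, the command should print just its name:

    dmoz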

Spider: dmoz

The dmoz spider scrapes the Open Directory Project (dmoz.org). It is based on the dmoz spider described in the Scrapy tutorial.

This spider doesn't crawl the entire dmoz.org site but only a couple of pages by default (defined in the spider's start_urls attribute). These pages are:

    http://www.dmoz.org/Computers/Programming/Languages/Python/Books/
    http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/

So, if you run the spider regularly (with scrapy crawl dmoz) it will scrape only those two pages.
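As an illustration, the core of the spider looks roughly like the sketch below. The XPath selectors here are placeholders chosen for illustration only; the real implementation is in dirbot/spiders/dmoz.py:

    from scrapy.spiders import Spider
    from dirbot.items import Website

    class DmozSpider(Spider):
        name = "dmoz"
        allowed_domains = ["dmoz.org"]
        start_urls = [
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
            "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
        ]

        def parse(self, response):
            # Illustrative selectors; the actual spider's selectors differ.
            for site in response.xpath('//ul/li'):
                item = Website()
                item['name'] = site.xpath('a/text()').extract_first()
                item['url'] = site.xpath('a/@href').extract_first()
                item['description'] = site.xpath('text()').extract_first()
                yield item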

Pipelines

Filtering by words

A pipeline to filter out websites containing certain forbidden words in their description. This pipeline is defined in the class:

dirbot.pipelines.FilterWordsPipeline
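A minimal sketch of such a pipeline, with an assumed word list for illustration (the actual list lives in dirbot/pipelines.py):

    from scrapy.exceptions import DropItem

    class FilterWordsPipeline(object):
        """Drop items whose description contains a forbidden word."""

        # Illustrative word list; see dirbot/pipelines.py for the real one.
        words_to_filter = ['politics', 'religion']

        def process_item(self, item, spider):
            description = (item.get('description') or '').lower()
            for word in self.words_to_filter:
                if word in description:
                    raise DropItem("Contains forbidden word: %s" % word)
            return item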

Requiring certain item fields

A pipeline to discard items that lack certain required fields. This pipeline is defined in the class:

dirbot.pipelines.RequiredFieldsPipeline
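A sketch of the idea, assuming name, url and description are the required fields (check dirbot/pipelines.py for the actual list):

    from scrapy.exceptions import DropItem

    class RequiredFieldsPipeline(object):
        """Drop items that are missing any required field."""

        # Assumed required fields, for illustration only.
        required_fields = ('name', 'url', 'description')

        def process_item(self, item, spider):
            for field in self.required_fields:
                if not item.get(field):
                    raise DropItem("Missing required field: %s" % field)
            return item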

Storing items into a database

A pipeline to store (insert or update) scraped items in a database. This pipeline is defined in the class:

dirbot.pipelines.DbPipeline

The database schema is defined in db/script.sql, and the settings file contains default values for the DB_* settings (configured for MySQL). Scraped items are stored in the website table of the dirbot database.
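The sketch below shows the general shape of such a pipeline built on Twisted's adbapi, which is what this project demonstrates. The exact DB_* setting names, table columns and SQL statement here are assumptions; the authoritative versions are in dirbot/pipelines.py, dirbot/settings.py and db/script.sql:

    from twisted.enterprise import adbapi

    class DbPipeline(object):
        """Store scraped websites in MySQL using Twisted's non-blocking adbapi pool."""

        def __init__(self, settings):
            # DB_* keys are assumptions mirroring the settings file; adjust to the real names.
            self.dbpool = adbapi.ConnectionPool(
                'MySQLdb',
                host=settings['DB_HOST'],
                db=settings['DB_NAME'],
                user=settings['DB_USER'],
                passwd=settings['DB_PASSWD'],
                charset='utf8',
                use_unicode=True,
            )

        @classmethod
        def from_crawler(cls, crawler):
            return cls(crawler.settings)

        def process_item(self, item, spider):
            # runInteraction executes the insert/update in a thread pool,
            # so the Twisted reactor is never blocked.
            d = self.dbpool.runInteraction(self._upsert, item)
            d.addErrback(self._handle_error, item, spider)
            d.addBoth(lambda _: item)
            return d

        def _upsert(self, tx, item):
            # Illustrative SQL; the real schema is defined in db/script.sql.
            tx.execute(
                "INSERT INTO website (name, url, description) VALUES (%s, %s, %s) "
                "ON DUPLICATE KEY UPDATE name=VALUES(name), description=VALUES(description)",
                (item['name'], item['url'], item['description']),
            )

        def _handle_error(self, failure, item, spider):
            spider.logger.error(failure)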

About

A Scrapy project, based on dirbot, that shows how to use Twisted's adbapi to store the scraped data in a database.
