Unlike CrawlerProcess, CrawlerRunner does not manage the Twisted reactor for you, so the script must start the reactor itself and stop it when the crawl is done:

```python
from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner
from spiders.my_spider import MySpider

runner = CrawlerRunner()
runner.crawl(MySpider)
d = runner.join()  # Deferred that fires when all scheduled crawls finish
d.addBoth(lambda _: reactor.stop())  # stop the reactor when done
reactor.run()  # blocks until the crawl has finished
```
To customize the crawl, pass a Settings object to the runner; here the feed export is configured to write the scraped items to output.json:

```python
from twisted.internet import reactor
from scrapy.settings import Settings
from scrapy.crawler import CrawlerRunner
from spiders.my_spider import MySpider

# Note: on Scrapy 2.1+ these two settings are deprecated in favor of
# Settings({"FEEDS": {"output.json": {"format": "json"}}})
settings = Settings({"FEED_FORMAT": "json", "FEED_URI": "output.json"})
runner = CrawlerRunner(settings)
runner.crawl(MySpider)
d = runner.join()
d.addBoth(lambda _: reactor.stop())
reactor.run()
```
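Once the crawl finishes, the JSON feed can be read back with the standard library. The sample data below is hypothetical; the real contents of output.json depend on the items MySpider yields:

```python
import json

# Hypothetical sample of what a JSON feed export might contain
sample_feed = '[{"title": "First item"}, {"title": "Second item"}]'

items = json.loads(sample_feed)
print(len(items))  # → 2 (number of scraped items)
```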
```python
from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner
from spiders.my_spider1 import MySpider1
from spiders.my_spider2 import MySpider2

runner = CrawlerRunner()
runner.crawl(MySpider1)  # both crawls are scheduled up front...
runner.crawl(MySpider2)  # ...and run concurrently in the same reactor
d = runner.join()  # fires once every scheduled crawl has finished
d.addBoth(lambda _: reactor.stop())
reactor.run()
```

This example shows how to run multiple spiders with a single CrawlerRunner. We create one CrawlerRunner, schedule crawls for both MySpider1 and MySpider2, and use join() to wait until both spiders have finished scraping before stopping the reactor. Overall, the scrapy.crawler.CrawlerRunner class is a useful tool for running multiple spiders efficiently from a single Python script.