This Scrapy project uses Redis and Kafka to create a distributed on-demand scraping cluster.
The goal is to distribute seed URLs among many waiting spider instances, whose requests are coordinated via Redis. Any further crawls those seeds trigger, through frontier expansion or depth traversal, are also distributed among all workers in the cluster.
The input to the system is a set of Kafka topics, and the output is a set of Kafka topics. Raw HTML and assets are crawled interactively, spidered, and output to the log. For easy local development you can also disable the Kafka portions and work with the spider entirely via Redis, although this is not recommended due to the serialization of the crawl requests.
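For example, crawl jobs are submitted to the cluster as JSON messages on an inbound Kafka topic. The following is a minimal sketch of such a submission, assuming the kafka-python package, a broker on localhost:9092, and an inbound topic named `demo.incoming`; the topic name and the `appid`/`crawlid` fields are illustrative and may differ in your configuration:

```python
# Minimal sketch: submit a crawl request to the cluster's inbound Kafka topic.
# The broker address, topic name, and message fields below are assumptions --
# adjust them to match your cluster's configuration.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers='localhost:9092',
    value_serializer=lambda obj: json.dumps(obj).encode('utf-8'),
)

# A hypothetical seed request: 'appid' and 'crawlid' identify the job so that
# results appearing on the outbound topics can be tied back to this submission.
request = {
    'url': 'http://example.com',
    'appid': 'testapp',
    'crawlid': 'abc123',
}

producer.send('demo.incoming', request)
producer.flush()
```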
Please see the `requirements.txt` within each subproject for pip package dependencies.
Other important components required to run the cluster include Redis, Zookeeper, and Kafka.
This project brings a number of new concepts to Scrapy and to large-scale distributed crawling in general, such as on-demand crawl submission via Kafka, coordination of requests across the cluster via Redis, and the ability to distribute work among many spider instances on many machines.
To set up a pre-canned Scrapy Cluster test environment, make sure you have the latest VirtualBox and Vagrant >= 1.7.4 installed. Vagrant will automatically mount the base scrapy-cluster directory to the /vagrant directory, so any code changes you make will be visible inside the VM. Please note that at the time of writing this will not work on a Windows machine. To launch and test the environment:
1. `vagrant up` in the base scrapy-cluster directory.
2. `vagrant ssh` to ssh into the VM.
3. `sudo supervisorctl status` to check that everything is running.
4. `virtualenv sc` to create a virtual environment.
5. `source sc/bin/activate` to activate the virtual environment.
6. `cd /vagrant` to get to the scrapy-cluster directory.
7. `pip install -r requirements.txt` to install Scrapy Cluster dependencies.
8. `./run_offline_tests.sh` to run offline tests.
9. `./run_online_tests.sh` to run online tests (relies on Kafka, Zookeeper, and Redis; see the connectivity sketch below).

Please check out the official Scrapy Cluster 1.2.1 documentation for more information on how everything works!
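Before running the online tests, it can help to confirm that Redis, Zookeeper, and Kafka are actually reachable from inside the VM. Below is a minimal sketch, assuming the `redis` and `kafka-python` packages and the services' default local ports; the hosts and ports are assumptions to adjust for your environment:

```python
# Minimal connectivity check for the services the online tests rely on.
# Hosts and ports below are assumptions (default local ports); adjust them
# to match your environment.
import socket

import redis
from kafka import KafkaConsumer

# Redis: PING returns True if the server is up.
r = redis.StrictRedis(host='localhost', port=6379)
print('Redis ok:', r.ping())

# Zookeeper: the 'ruok' four-letter command returns 'imok' when healthy.
sock = socket.create_connection(('localhost', 2181), timeout=5)
sock.sendall(b'ruok')
print('Zookeeper ok:', sock.recv(16) == b'imok')
sock.close()

# Kafka: constructing a consumer forces a metadata fetch from the broker.
consumer = KafkaConsumer(bootstrap_servers='localhost:9092')
print('Kafka topics:', sorted(consumer.topics()))
consumer.close()
```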
The `master` branch of this repository contains the latest stable release code for Scrapy Cluster 1.2.1.

The `dev` branch contains bleeding-edge code and is currently working towards Scrapy Cluster 1.3. Please note that not everything may be documented, finished, tested, or finalized, but we are happy to help guide those who are interested.