Java Code Examples for edu.uci.ics.crawler4j.crawler.CrawlConfig#setIncludeHttpsPages()

The following examples show how to use edu.uci.ics.crawler4j.crawler.CrawlConfig#setIncludeHttpsPages(). Each example notes the project and source file it was taken from.
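As a quick orientation before the project examples, here is a minimal sketch of where setIncludeHttpsPages(true) typically sits in a basic crawler4j setup. The class names, storage folder, and seed URL below are illustrative assumptions and do not come from any of the projects listed here.

import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;

public class HttpsCrawlExample {

    // Minimal crawler that just prints each visited URL.
    public static class PrintingCrawler extends WebCrawler {
        @Override
        public void visit(Page page) {
            System.out.println("Visited: " + page.getWebURL().getURL());
        }
    }

    public static void main(String[] args) throws Exception {
        CrawlConfig config = new CrawlConfig();
        config.setCrawlStorageFolder("/tmp/crawler4j");   // illustrative storage path
        config.setIncludeHttpsPages(true);                // follow https:// URLs as well as http://
        config.setMaxDepthOfCrawling(2);
        config.setMaxPagesToFetch(100);

        PageFetcher pageFetcher = new PageFetcher(config);
        RobotstxtServer robotstxtServer = new RobotstxtServer(new RobotstxtConfig(), pageFetcher);
        CrawlController controller = new CrawlController(config, pageFetcher, robotstxtServer);

        controller.addSeed("https://www.example.com/");   // illustrative seed URL
        controller.start(PrintingCrawler.class, 2);       // 2 crawler threads
    }
}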
Example 1
Source File: VsController.java    From visual-spider with MIT License
/**
 * Initializes the crawl controller.
 *
 * @param numberOfCrawlers number of crawler threads
 * @param maxDepthOfCrawling maximum crawl depth
 * @param maxPagesToFetch maximum number of pages to fetch
 * @param politenessDelay politeness delay (in milliseconds)
 * @param links seed links to crawl
 */
public void init(int numberOfCrawlers, int maxDepthOfCrawling, int maxPagesToFetch, int politenessDelay, String[] links) {
    this.numberOfCrawlers = numberOfCrawlers;
    CrawlConfig config = new CrawlConfig();
    config.setCrawlStorageFolder(DefaultConfigValues.CRAWL_STORAGE_FOLDER);
    config.setMaxDepthOfCrawling(maxDepthOfCrawling);
    config.setIncludeHttpsPages(true);
    config.setMaxPagesToFetch(maxPagesToFetch);
    config.setIncludeBinaryContentInCrawling(false);
    config.setPolitenessDelay(politenessDelay);
    config.setUserAgentString(DefaultConfigValues.USER_AGENT);
    config.setResumableCrawling(true);

    if (com.zhazhapan.vspider.models.CrawlConfig.getTurnOnProxy().get()) {
        LOGGER.info("open proxy");
        config.setProxyHost(com.zhazhapan.vspider.models.CrawlConfig.getProxyServer().get());
        config.setProxyPort(Formatter.stringToInt(com.zhazhapan.vspider.models.CrawlConfig.getProxyPort().get()));
        config.setProxyUsername(com.zhazhapan.vspider.models.CrawlConfig.getProxyUser().get());
        config.setProxyPassword(com.zhazhapan.vspider.models.CrawlConfig.getProxyPass().get());
    }

    PageFetcher pageFetcher = new PageFetcher(config);
    RobotstxtConfig robotstxtConfig = new RobotstxtConfig();
    robotstxtConfig.setEnabled(false);
    RobotstxtServer robotstxtServer = new RobotstxtServer(robotstxtConfig, pageFetcher);
    try {
        controller = new CrawlController(config, pageFetcher, robotstxtServer);
        for (String link : links) {
            if (Checker.isHyperLink(link)) {
                controller.addSeed(link);
            }
        }
        isInited = true;
    } catch (Exception e) {
        LOGGER.error("start to crawl urls error: " + e.getMessage());
    }
}
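Note that init(...) only prepares the CrawlController; in crawler4j the crawl itself is launched with CrawlController#start, passing a WebCrawler subclass and the thread count. A hypothetical follow-up might look like the sketch below, where VsSpider stands in for the project's crawler class (not shown in visual-spider):

// Hypothetical follow-up to init(); VsSpider is an assumed WebCrawler subclass.
if (isInited) {
    controller.start(VsSpider.class, numberOfCrawlers);
}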
 
Example 2
Source File: Crawler.java    From scava with Eclipse Public License 2.0
public Crawler(File storing, List<String> urlSeeds, int maxDept, int maxPages)
{
	logger = (OssmeterLogger) OssmeterLogger.getLogger("nlp.tools.webcrawler");
	CrawlConfig config = new CrawlConfig();
	config.setIncludeHttpsPages(true);
	config.setPolitenessDelay(1000);
	config.setCrawlStorageFolder(storing.toString());
	config.setMaxDepthOfCrawling(maxDept);
	config.setMaxPagesToFetch(maxPages);

	createCrawler(config, storing, urlSeeds);
}
 
Example 3
Source File: Crawler.java    From scava with Eclipse Public License 2.0
public Crawler(File storing, List<String> urlSeeds, int maxDept, int maxPages,String loginURL, String username, String password, String usernameFieldName, String passwordFieldName) throws MalformedURLException
{
	logger = (OssmeterLogger) OssmeterLogger.getLogger("nlp.tools.webcrawler");
	CrawlConfig config = new CrawlConfig();
	config.setIncludeHttpsPages(true);
	config.setPolitenessDelay(1000);
	config.setCrawlStorageFolder(storing.toString());
	config.setMaxDepthOfCrawling(maxDept);
	config.addAuthInfo(createAuthethicator(username, password, loginURL, usernameFieldName, passwordFieldName));
	config.setMaxPagesToFetch(maxPages);

	createCrawler(config, storing, urlSeeds);
}
 
Example 4
Source File: Crawler.java    From scava with Eclipse Public License 2.0
public Crawler(File storing, List<String> urlSeeds)
{
	logger = (OssmeterLogger) OssmeterLogger.getLogger("nlp.tools.webcrawler");
	CrawlConfig config = new CrawlConfig();
	config.setIncludeHttpsPages(true);
	config.setPolitenessDelay(1000);
	config.setCrawlStorageFolder(storing.toString());
	config.setMaxDepthOfCrawling(-1);
	config.setMaxPagesToFetch(-1);

	createCrawler(config, storing, urlSeeds);
}
 
Example 5
Source File: Crawler.java    From scava with Eclipse Public License 2.0
public Crawler(File storing, List<String> urlSeeds, String loginURL, String username, String password, String usernameFieldName, String passwordFieldName) throws MalformedURLException
{
	logger = (OssmeterLogger) OssmeterLogger.getLogger("nlp.tools.webcrawler");
	CrawlConfig config = new CrawlConfig();
	config.setIncludeHttpsPages(true);
	config.setPolitenessDelay(1000);
	config.setCrawlStorageFolder(storing.toString());
	config.setMaxDepthOfCrawling(-1);
	config.setMaxPagesToFetch(-1);
	config.addAuthInfo(createAuthethicator(username, password, loginURL, usernameFieldName, passwordFieldName));
	createCrawler(config, storing, urlSeeds);
}
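The createAuthethicator helper used in Examples 3 and 5 is not shown in these snippets. Since CrawlConfig#addAuthInfo expects an AuthInfo and the constructors declare MalformedURLException, it plausibly wraps the credentials in crawler4j's FormAuthInfo. The sketch below is an assumption about its shape, not the actual scava implementation:

// Assumed shape of createAuthethicator, based on crawler4j's form authentication API.
private AuthInfo createAuthethicator(String username, String password, String loginURL,
                                     String usernameFieldName, String passwordFieldName)
        throws MalformedURLException {
    // FormAuthInfo posts the credentials to loginURL using the given form field names.
    return new FormAuthInfo(username, password, loginURL, usernameFieldName, passwordFieldName);
}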