Scraping a Company's Job Postings with Python and Scrapy
Published: 2019-06-15


1. Create the project

scrapy startproject gosuncn
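
Running startproject scaffolds the project. The generated layout looks roughly like this (it can vary slightly across Scrapy versions):

gosuncn/
    scrapy.cfg            # deploy configuration
    gosuncn/              # the project's Python module
        __init__.py
        items.py          # item definitions
        middlewares.py    # spider/downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # spider code goes here
            __init__.py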

2. Generate the spider

cd gosuncn
scrapy genspider gaoxinxing gosuncn.zhiye.com
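
genspider drops a skeleton spider at gosuncn/spiders/gaoxinxing.py. With a typical Scrapy 1.x template it starts out roughly like the sketch below; step 4 fills it in:

# -*- coding: utf-8 -*-
import scrapy


class GaoxinxingSpider(scrapy.Spider):
    name = 'gaoxinxing'
    allowed_domains = ['gosuncn.zhiye.com']
    start_urls = ['http://gosuncn.zhiye.com/']

    def parse(self, response):
        pass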

3. Run the spider

scrapy crawl gaoxinxing
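
Note that scrapy crawl must be run from inside the project directory. If you would rather launch the spider from a plain Python script, here is a minimal sketch using Scrapy's CrawlerProcess (the file name run.py is an assumption; place it in the project root so settings.py is found):

# run.py -- hypothetical launcher script in the project root
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())  # load the project's settings.py
process.crawl('gaoxinxing')                       # queue the spider by name
process.start()                                   # block until crawling finishes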

4. Spider code: gaoxinxing.py

# -*- coding: utf-8 -*-
import scrapy
import logging

logger = logging.getLogger(__name__)  # set up a module-level logger


class GaoxinxingSpider(scrapy.Spider):
    name = 'gaoxinxing'
    allowed_domains = ['gosuncn.zhiye.com']
    start_urls = ['http://gosuncn.zhiye.com/Social']
    next_page_num = 1

    def parse(self, response):
        # Select all rows of the jobs table, skipping the header row
        tr_list = response.xpath("//table[@class='jobsTable']/tr")[1:]
        for tr in tr_list:
            item = {}
            item["position"] = tr.xpath(".//td[1]/a/text()").extract_first()
            item["platform"] = tr.xpath(".//td[3]/text()").extract_first()
            item["num"] = tr.xpath(".//td[4]/text()").extract_first()
            item["time"] = tr.xpath(".//td[6]/text()").extract_first()
            logger.warning(item)  # log each item
            yield item

        # Paginate by incrementing a counter and building the URL by hand
        # (the site exposes ?PageIndex=N); stop after page 4
        self.next_page_num = self.next_page_num + 1
        if self.next_page_num <= 4:
            next_url = "http://gosuncn.zhiye.com/social/?PageIndex=" + str(self.next_page_num)
            print(next_url)
            yield scrapy.Request(
                next_url,
                callback=self.parse
            )
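
The spider above hard-codes the page count (<= 4). A more robust alternative is to follow the page's own "next" link; the sketch below reuses the pager XPath that the original code had commented out, so treat it as untested against the live site:

# inside parse(), instead of the counter-based pagination:
next_page_url = response.xpath("//div[@class='pager2']//a[@class='next']/@href").extract_first()
if next_page_url:
    # response.follow resolves relative URLs against the current page
    yield response.follow(next_page_url, callback=self.parse)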

5. settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for gosuncn project
#
# Only the settings changed from the generated template are shown below;
# the remaining template defaults (concurrency, cookies, middlewares,
# AutoThrottle, HTTP caching, etc.) stay commented out as scaffolded.
# For more settings see:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'gosuncn'

SPIDER_MODULES = ['gosuncn.spiders']
NEWSPIDER_MODULE = 'gosuncn.spiders'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Show only WARNING and above, and write the log to a file
LOG_LEVEL = "WARNING"
LOG_FILE = "./log.log"

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'gosuncn.pipelines.GosuncnPipeline': 300,
}
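
If you just want the scraped items on disk, Scrapy's built-in feed exports work without any pipeline at all; for example (the output filename is up to you, and -o jobs.csv works the same way):

scrapy crawl gaoxinxing -o jobs.json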

6. pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


class GosuncnPipeline(object):
    def process_item(self, item, spider):
        print(item)  # simply print each item that reaches the pipeline
        return item
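
The pipeline above only prints each item. As a next step, here is a minimal sketch of a pipeline that persists items to a JSON Lines file (the class name and the items.jl filename are illustrative; remember to register it in ITEM_PIPELINES):

# -*- coding: utf-8 -*-
import json


class JsonWriterPipeline(object):
    """Append each scraped item to a JSON Lines file (one JSON object per line)."""

    def open_spider(self, spider):
        # called once when the spider opens
        self.file = open('items.jl', 'w', encoding='utf-8')

    def close_spider(self, spider):
        # called once when the spider closes
        self.file.close()

    def process_item(self, item, spider):
        line = json.dumps(dict(item), ensure_ascii=False)
        self.file.write(line + "\n")
        return item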

 

Reposted from: https://www.cnblogs.com/ywjfx/p/11080099.html
