

Advanced Python Crawling: Scraping Shanghai and Shenzhen A-Share Stock Quotes with Selenium + Firefox in Scrapy

1. Introduction

The previous post covered how to use Selenium together with Scrapy. With the basics in place, we can now apply the technique to a real requirement. Many stock sites render their quote tables as dynamic data, so we can use Scrapy + Selenium to collect quotes in real time, persist them, and then run data analysis, send email notifications, and so on.
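As a small taste of the analysis step: the change percentage is scraped as a string such as "+0.57%", which must be converted to a number before it can be analyzed. A minimal helper sketch (the sample values are made up for illustration):

```python
def parse_rate(rate_str):
    """Convert a change-percentage string like '+0.57%' to a float."""
    return float(rate_str.rstrip('%'))

print(parse_rate("+0.57%"))   # 0.57
print(parse_rate("-1.20%"))   # -1.2
```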

2. Environment Setup

See the previous post for details.

3. Code Implementation

  • items

```python
import scrapy

class StockSpiderItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    stock_code = scrapy.Field()       # stock code
    stock_name = scrapy.Field()       # stock name
    last_price = scrapy.Field()       # latest price
    rise_fall_rate = scrapy.Field()   # change percentage
    rise_fall_price = scrapy.Field()  # change amount
```
  • middlewares

```python
from scrapy.http import HtmlResponse
from selenium import webdriver
from selenium.webdriver.firefox.options import Options as firefox_options

class StockSpiderDownloaderMiddleware:
    def __init__(self):
        # ---------------- Firefox settings ---------------- #
        self.options = firefox_options()

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
        # the browser instance the spider will drive
        spider.driver = webdriver.Firefox(options=self.options)

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader middleware.
        # Must either return None, a Response, a Request, or raise IgnoreRequest.
        spider.driver.get("https://quote.eastmoney.com/center/gridlist.html#hs_a_board")
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader; replace its
        # body with the page source rendered by the browser.
        response_body = spider.driver.page_source
        return HtmlResponse(url=request.url, body=response_body,
                            encoding='utf-8', request=request)
```
  • settings

```python
# Enable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
    'stock_spider.middlewares.StockSpiderSpiderMiddleware': 543,
}

# Enable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    'stock_spider.middlewares.StockSpiderDownloaderMiddleware': 543,
}
```
  • spider file

```python
from stock_spider.items import StockSpiderItem

# methods of the spider class
def parse(self, response):
    # stock code
    stock_code = response.css("table.table_wrapper-table tbody tr td:nth-child(2) a::text").extract()
    # stock name
    stock_name = response.css("table.table_wrapper-table tbody tr td:nth-child(3) a::text").extract()
    # latest price
    last_price = response.css("table.table_wrapper-table tbody tr td:nth-child(5) span::text").extract()
    # change percentage
    rise_fall_rate = response.css("table.table_wrapper-table tbody tr td:nth-child(6) span::text").extract()
    # change amount
    rise_fall_price = response.css("table.table_wrapper-table tbody tr td:nth-child(7) span::text").extract()
    for i in range(len(stock_code)):
        item = StockSpiderItem()
        item["stock_code"] = stock_code[i]
        item["stock_name"] = stock_name[i]
        item["last_price"] = last_price[i]
        item["rise_fall_rate"] = rise_fall_rate[i]
        item["rise_fall_price"] = rise_fall_price[i]
        yield item

def close(self, spider):
    spider.driver.quit()
```
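One caveat with the index loop above: if any column selector misses some rows, `stock_name[i]` and friends can raise an IndexError. A safer variant pairs the columns with `zip()`, which stops at the shortest list. A sketch using plain lists in place of the CSS-extracted ones (sample values are illustrative):

```python
def build_rows(stock_code, stock_name, last_price, rise_fall_rate, rise_fall_price):
    # zip() pairs the five columns row by row and stops at the shortest one,
    # so a short column silently drops incomplete rows instead of crashing
    return [
        {
            "stock_code": code,
            "stock_name": name,
            "last_price": price,
            "rise_fall_rate": rate,
            "rise_fall_price": change,
        }
        for code, name, price, rate, change in zip(
            stock_code, stock_name, last_price, rise_fall_rate, rise_fall_price
        )
    ]

# the second stock code has no matching name, so only one row is built
rows = build_rows(["600000", "600036"], ["浦发银行"], ["7.01"], ["+0.57%"], ["+0.04"])
```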
  • pipelines (persistence)

```python
def process_item(self, item, spider):
    """
    Write each received item to a CSV file.
    """
    filename = 'stock_info.csv'
    with open(filename, 'a+', encoding='utf-8') as f:
        line = item["stock_code"] + "," + item["stock_name"] + "," + item["last_price"] + "," + \
               item["rise_fall_rate"] + "," + item["rise_fall_price"] + "\n"
        f.write(line)
    return item
```
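Joining fields with `+ ","` breaks if a field ever contains a comma. The standard `csv` module handles quoting automatically; a sketch of the same write using `csv.writer` (the field values here are illustrative):

```python
import csv
import io

def write_item_csv(f, item):
    # csv.writer quotes fields containing commas or quotes,
    # which manual string concatenation does not
    writer = csv.writer(f)
    writer.writerow([item["stock_code"], item["stock_name"], item["last_price"],
                     item["rise_fall_rate"], item["rise_fall_price"]])

buf = io.StringIO()
write_item_csv(buf, {"stock_code": "600000", "stock_name": "浦发银行",
                     "last_price": "7.01", "rise_fall_rate": "+0.57%",
                     "rise_fall_price": "+0.04"})
# buf now holds the row: 600000,浦发银行,7.01,+0.57%,+0.04
```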
  • readme file

1. Install the dependencies:
   - python 3.0+
   - pip install -r requirements.txt
2. Set the second-level stock_spider folder as the project root directory.
3. Place the Firefox driver package (geckodriver) in the Scripts folder of your Python environment.
4. The Firefox browser itself must be installed, otherwise the driver cannot launch it.
5. Run spider_main.py to start the crawler.
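The readme installs from a requirements.txt that the post does not show; a minimal version would presumably list just the two libraries the code imports (exact version pins are an assumption left to the reader):

```
scrapy
selenium
```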
Tags: python crawler scrapy

This article is reposted from: https://blog.csdn.net/qq_23730073/article/details/135158686
Copyright belongs to the original author, code_space. If there is any infringement, please contact us for removal.
