[Python Self-Study Notes] A must-have for scraping beginners!! Using item pipelines in Scrapy (cleaning and saving data)!

A Douban Top 250 movie crawler built with Python Scrapy, including the database SQL and full source code:

```python
# -*- coding: utf-8 -*-
"""
@Author  : nesta
@Email   : 572645517@qq.com
@Software: PyCharm
@project : movie
@File    : MovieSpider.py
@Time    : 2018/4/26 9:18
"""
from scrapy.spiders import Spider
from scrapy.http import Request
from scrapy.selector import Selector
from movie.items import MovieItem


class MovieSpider(Spider):
    name = 'movie'
    url = u'https://movie.douban.com/top250'
    start_urls = [u'https://movie.douban.com/top250']

    def parse(self, response):
        selector = Selector(response)
        # Each movie entry lives in a <div class="info"> block
        movies = selector.xpath('//div[@class="info"]')
        for movie in movies:
            # Create a fresh item per movie so fields are not overwritten
            item = MovieItem()
            # Join all title spans (Chinese title, original title, aliases)
            title = movie.xpath('div[@class="hd"]/a/span/text()').extract()
            fullTitle = ''
            for each in title:
                fullTitle += each
            movieInfo = movie.xpath('div[@class="bd"]/p/text()').extract()
            star = movie.xpath(
                'div[@class="bd"]/div[@class="star"]'
                '/span[@class="rating_num"]/text()').extract()[0]
            quote = movie.xpath('div[@class="bd"]/p/span/text()').extract()
            quote = quote[0] if quote else ''
            item['title'] = fullTitle
            # Strip spaces and newlines from the director/cast/year block
            item['movieInfo'] = ';'.join(movieInfo).replace(' ', '').replace('\n', '')
            item['star'] = star
            item['quote'] = quote
            yield item
        # Follow the "next page" link, if any
        nextPage = selector.xpath('//span[@class="next"]/link/@href').extract()
        if nextPage:
            nextPage = nextPage[0]
            print(self.url + str(nextPage))
            yield Request(self.url + str(nextPage), callback=self.parse)
```
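The title's topic, using item pipelines to clean and save the scraped data, is not shown in the spider itself. Below is a minimal, hypothetical `pipelines.py` sketch for this project: the class names (`MovieCleanPipeline`, `JsonWriterPipeline`) and the output filename `movies.jl` are assumptions, but `process_item`, `open_spider`, and `close_spider` are the standard Scrapy pipeline hooks, and the field names match the items yielded by the spider above.

```python
import json


class MovieCleanPipeline:
    """Hypothetical cleaning pipeline: strips stray whitespace from
    string fields of each item yielded by MovieSpider."""

    def process_item(self, item, spider):
        for key in ('title', 'movieInfo', 'star', 'quote'):
            if key in item and isinstance(item[key], str):
                item[key] = item[key].strip()
        return item  # pass the cleaned item on to the next pipeline


class JsonWriterPipeline:
    """Hypothetical saving pipeline: appends each item as one
    JSON line to movies.jl (an assumed filename)."""

    def open_spider(self, spider):
        self.file = open('movies.jl', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        self.file.close()
```

Pipelines are enabled in `settings.py` via `ITEM_PIPELINES = {'movie.pipelines.MovieCleanPipeline': 300, 'movie.pipelines.JsonWriterPipeline': 400}`; lower numbers run first, so cleaning happens before saving.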

Original: https://blog.csdn.net/xiaoqiangclub/article/details/117810666
Author: xiaoqiangclub
Title: [Python Self-Study Notes] A must-have for scraping beginners!! Using item pipelines in Scrapy (cleaning and saving data)!


