Constructing a regex for the URLs in a Scrapy spider's start_urls list

I am very new to Scrapy, and I have not used regular expressions before.

Here is my spider.py code:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector

class ExampleSpider(BaseSpider):
    name = "test_code"
    allowed_domains = ["www.example.com"]
    start_urls = [
        "http://www.example.com/bookstore/new/1?filter=bookstore",
        "http://www.example.com/bookstore/new/2?filter=bookstore",
        "http://www.example.com/bookstore/new/3?filter=bookstore",
    ]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)

Now, if we look at the start_urls, all three URLs are identical except for their integer values (1, 2, 3, and so on; in fact, there is no fixed limit to how many such URLs the site has). I would now like to use a CrawlSpider and construct a regular expression for these URLs, as in my attempt below.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
import re

class ExampleSpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = [
        "http://www.example.com/bookstore/new/1?filter=bookstore",
        "http://www.example.com/bookstore/new/2?filter=bookstore",
        "http://www.example.com/bookstore/new/3?filter=bookstore",
    ]

    rules = (
        Rule(SgmlLinkExtractor(allow=(........)),),
    )

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
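As an aside (not part of the original question): since the URLs differ only by the page number, the start_urls list itself could also be generated rather than typed out by hand. A minimal sketch, where the upper bound of 3 is only an assumption taken from the three URLs listed above:

    # Sketch only: build the start_urls instead of hard-coding them.
    # The range 1..3 mirrors the three URLs above and is an assumption.
    start_urls = [
        "http://www.example.com/bookstore/new/%d?filter=bookstore" % page
        for page in range(1, 4)
    ]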

Could you please guide me on how to construct the CrawlSpider rules for the above start_urls list?
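For illustration, here is a minimal sketch of what such a rule could look like. It is not a verified answer: the regular expression and the parse_item callback name are assumptions. The \d+ part matches the changing integer, and the callback is deliberately not named parse, because CrawlSpider uses parse internally to drive its rules.

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector

class ExampleSpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    # A single entry point is enough; the rule below follows the numbered pages.
    start_urls = ["http://www.example.com/bookstore/new/1?filter=bookstore"]

    rules = (
        # Follow any link on a crawled page whose URL matches this pattern;
        # \d+ stands for the page number (1, 2, 3, ...).
        Rule(SgmlLinkExtractor(allow=(r'/bookstore/new/\d+\?filter=bookstore',)),
             callback='parse_item', follow=True),
    )

    # Note: the callback must not be called parse, since CrawlSpider
    # reserves that method for its own rule handling.
    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)
        # ... extract data here ...

Keep in mind that the link extractor only matches links it finds on the pages it crawls, so this approach works when the numbered pages link to one another (for example, through pagination links).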

Original: https://blog.csdn.net/weixin_42548752/article/details/113652157
Author: 佳丽影像
