Crawling a job-listing site over multiple pages with requests carrying cookies, and saving the results to MySQL and SQLite

A cookie (sometimes written in the plural, cookies) is a small text file: data, usually encrypted, that certain websites store on the user's local machine in order to identify the user and track the session; the client keeps it either temporarily or permanently.

In short, the cookie represents your identity. Whether you are logged in is recognised through it, which neatly solves the case where a crawler has to be in a logged-in state before it can fetch the data.

A basic crawler should carry request headers, which hold the information that makes the request look like a real browser; after all, a website is rarely friendly to a bare crawler. Our cookie is sent as one of these headers:

    headers1 = {
        'Accept': 'application/json, text/javascript, */*; q=0.01',
        'Accept-Encoding': 'gzip, deflate, br',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive',
        'Cookie': cookie1,
        'Pragma': 'no-cache',
        'Referer': 'https://xxxx.com',
        'sec-ch-ua': '" Not A;Brand";v="99", "Chromium";v="100", "Google Chrome";v="100"',
        'sec-ch-ua-mobile': '?0',
        'sec-ch-ua-platform': '"Windows"',
        'Sec-Fetch-Dest': 'empty',
        'Sec-Fetch-Mode': 'cors',
        'Sec-Fetch-Site': 'same-origin',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.60 Safari/537.36',
        'X-Requested-With': 'XMLHttpRequest',
    }
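
The Cookie value itself is usually copied out of the browser's DevTools (Network panel) while logged in. As an aside, if the site sets its cookies in response to a request you can make yourself, requests.Session will carry them for you automatically. A minimal sketch, with a hypothetical login endpoint and form field names:

import requests

session = requests.Session()

# Hypothetical login endpoint and form fields; many sites need the cookie
# copied from the browser's DevTools instead of a simple form post like this.
session.post('https://xxxx.com/login', data={'username': 'u', 'password': 'p'})

# Any Set-Cookie from the response above is now stored on the session and
# is sent automatically with later requests to the same site.
resp = session.get('https://xxxx.com/list', headers={'User-Agent': 'Mozilla/5.0'})
print(resp.status_code)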

What each of these header fields means is worth looking up if any of them are unfamiliar.

A crawler really comes down to two steps: first, fetch the data; second, structure and save what you fetched.

The first step is usually the harder one, because fetching can run into anti-crawling measures, and even then you still have to locate the data inside the page. Here we use XPath for that. Put simply, the page source you see after pressing F12 is a tree, and XPath addresses data by walking that tree structure.

XPath tutorial (the one on runoob.com; worth a deeper look if you are interested)
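
As a tiny self-contained illustration of the two XPath expressions the crawler below uses (the HTML fragment here is made up to mimic the detail-page markup):

from lxml.etree import HTML

# A made-up fragment mimicking the job-detail markup targeted below.
doc = HTML('''
<div class="bmsg job_msg inbox"><p>负责数据采集</p><p>熟悉Python</p></div>
<div class="mt10"><p><a>软件工程师</a></p></div>
''')

# Join every text node under the job-description div.
print(''.join(doc.xpath('//div[@class="bmsg job_msg inbox"]//text()')))
# Text of the first <a> inside the first <p> of the category block.
print(''.join(doc.xpath('//div[@class="mt10"]/p[1]/a/text()')))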

Core crawler code

import time
import requests
from lxml.etree import HTML
import os
import pandas as pd

def check(a):
    """Split a job's attribute_text list into an experience field and an education field."""
    经验 = ''  # experience requirement
    学历 = ''  # education requirement
    for i in a:
        if '经验' in i or '在校生' in i:
            经验 = i
            continue
        if '硕士' in i or '大专' in i or '本科' in i or '高中' in i or '初中' in i or '博士' in i:
            学历 = i
            continue

    return 经验, 学历

cookie1 = input('请输入列表cookie:')

startpage = input('请输入开始页码:')
endpage = input('请输入结束页码:')

# +1 so the end page the user entered is crawled as well.
for page in range(int(startpage), int(endpage) + 1):
    url=f'https://xxxx.com/{page}.html?'
    # headers1: for the JSON list API on search.xxxx.com
    headers1 = {
        'Accept': 'application/json, text/javascript, */*; q=0.01',
        'Accept-Encoding': 'gzip, deflate, br',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive',
        'Cookie': cookie1,
        'Host': 'search.xxxx.com',
        'Pragma': 'no-cache',
        'Referer': 'https://xxxx.com/list?',
        'sec-ch-ua': '" Not A;Brand";v="99", "Chromium";v="100", "Google Chrome";v="100"',
        'sec-ch-ua-mobile': '?0',
        'sec-ch-ua-platform': '"Windows"',
        'Sec-Fetch-Dest': 'empty',
        'Sec-Fetch-Mode': 'cors',
        'Sec-Fetch-Site': 'same-origin',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.60 Safari/537.36',
        'X-Requested-With': 'XMLHttpRequest',
    }
    # headers2: for the HTML job-detail pages on jobs.xxxx.com
    headers2 = {
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Accept-Encoding': 'gzip, deflate, br',
        'Accept-Language': 'zh-CN,zh;q=0.9',
        'Cache-Control': 'no-cache',
        'Connection': 'keep-alive',
        'Cookie': cookie1,
        'Host': 'jobs.xxxx.com',
        'Pragma': 'no-cache',
        'Referer': 'https://xxxx.com',
        'sec-ch-ua': '" Not A;Brand";v="99", "Chromium";v="100", "Google Chrome";v="100"',
        'sec-ch-ua-mobile': '?0',
        'sec-ch-ua-platform': '"Windows"',
        'Sec-Fetch-Dest': 'document',
        'Sec-Fetch-Mode': 'navigate',
        'Sec-Fetch-Site': 'same-origin',
        'Sec-Fetch-User': '?1',
        'Upgrade-Insecure-Requests': '1',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.60 Safari/537.36',
    }

    # The list endpoint returns JSON; each entry under 'engine_jds' is one job posting.
    res = requests.get(url, headers=headers1).json()
    for i in res['engine_jds']:
        company_name=i['company_name']
        job_title=i['job_title']
        providesalary_text=i['providesalary_text']
        updatedate=i['updatedate']
        workarea_text=i['workarea_text']
        attribute_text=i['attribute_text']
        经验, 学历 = check(attribute_text)
        jobwelf=i['jobwelf']
        job_href=i['job_href']
        # Fetch the detail page; it may be GBK- or UTF-8-encoded.
        res_href = requests.get(job_href, headers=headers2).content
        try:
            res_href = res_href.decode('gbk')
        except UnicodeDecodeError:
            res_href = res_href.decode('utf8')
        if '滑动验证页面' in res_href:
            print('出现滑动检测请注意****************************************')
        # Locate the job description and the category with the two XPath expressions.
        zhiwei = ''.join(HTML(res_href).xpath(r'//div[@class="bmsg job_msg inbox"]//text()'))
        leibie = ''.join(HTML(res_href).xpath(r'//div[@class="mt10"]/p[1]/a/text()'))

        data={
            '职位名称': job_title,
            '公司名字': company_name,
            '工作城市': workarea_text,
            '经验要求': 经验,
            '学历要求': 学历,
            '薪资水平': providesalary_text,
            '福利待遇': jobwelf,
            '职位详情页': job_href,
            '职位信息': zhiwei,
            '职能类别': leibie,
        }

        # Append to the CSV; write the header row only when the file is first created.
        df = pd.DataFrame([data])
        savepath = r'xxxx24hour_1.csv'
        if not os.path.exists(savepath):
            df.to_csv(savepath, index=False, mode='a')
        else:
            df.to_csv(savepath, index=False, mode='a', header=None)
        print(data)
        time.sleep(2)  # pause between detail requests to stay polite

The word "database" sounds grand, but dig in a little and you find a database is just a large, structured store on disk. You can think of it as a folder: each database is one folder holding many spreadsheet-like tables, except those tables obey far more rules than a pile of Excel files.

This time we use two databases, MySQL and SQLite.

The core logic is the same for both: connect to the database service, loop over our data, and execute the corresponding SQL for each row (it reads almost like English: INSERT INTO inserts a row).
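
Neither script below creates the job table itself, so here is a hypothetical one-off setup sketch; the column names mirror the INSERT statements that follow, while the column types are my own guesses and should be adjusted to the real data:

import pymysql

# Hypothetical schema; names mirror the INSERT below, types are assumptions.
create_sql = """
CREATE TABLE IF NOT EXISTS job (
    job_name         VARCHAR(255),
    company_name     VARCHAR(255),
    job_city         VARCHAR(100),
    expirence        VARCHAR(100),
    degree           VARCHAR(50),
    salary_degree    VARCHAR(100),
    walfare          TEXT,
    work_detail      VARCHAR(512),
    work_information TEXT,
    work_type        VARCHAR(255)
) DEFAULT CHARSET = utf8mb4
"""

db = pymysql.connect(host='localhost', user='root', password='xxxx',
                     database='xxxx', charset='utf8')
with db.cursor() as cur:
    cur.execute(create_sql)
db.commit()
db.close()

With the table in place, the CSV-to-MySQL import script: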

import csv
import codecs

import pymysql
def get_conn():
    # Connect to the local MySQL server; the credentials here are redacted placeholders.
    db = pymysql.connect(host='localhost',
                         user='root',
                         password='xxxx',
                         database='xxxx',
                         charset='utf8')
    return db


def insert(cur, sql, args):
    # Execute one parameterised INSERT; print the error rather than aborting the whole import.
    try:
        cur.execute(sql, args)
    except Exception as e:
        print(e)
def read_csv_to_mysql(filename):
    '''
    Load the crawled CSV file into the MySQL job table row by row.
    '''
    with codecs.open(filename=filename, mode='r', encoding='utf-8') as f:
        reader = csv.reader(f)
        head = next(reader)  # skip the CSV header row

        conn = get_conn()
        cur = conn.cursor()
        sql = 'insert into job(job_name, company_name, job_city, expirence, degree, salary_degree, walfare, work_detail, work_information, work_type) values(%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)'
        for item in reader:
            args = tuple(item)
            insert(cur, sql=sql, args=args)
        conn.commit()
        cur.close()
        conn.close()


if __name__ == '__main__':
    read_csv_to_mysql("new_xxxx.csv")
import csv
import codecs
import sqlite3

class DbOperate(object):
    # Simple singleton: every instantiation returns the same object.
    def __new__(cls, *args, **kwargs):
        if not hasattr(cls, "_instance"):
            cls._instance = super(DbOperate, cls).__new__(cls)
        return cls._instance

    def __init__(self, db_name):
        self.db_name = db_name
        self.connect = sqlite3.connect(self.db_name)
        self.cursor = self.connect.cursor()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.connect.close()

    def execute_sql(self, sql, values):
        # Run a single parameterised statement; roll back on any failure.
        try:
            self.cursor.execute(sql, values)
            self.connect.commit()
        except Exception:
            self.connect.rollback()

    def executemany_sql(self, sql, data_list):
        # Insert many rows in one call; roll back and re-raise on failure.
        try:
            self.cursor.executemany(sql, data_list)
            self.connect.commit()
        except Exception:
            self.connect.rollback()
            raise Exception("executemany failed")

sqlite_path = "../my.db"

with DbOperate(sqlite_path) as db:
    sql = "insert into job(job_name, company_name, job_city, expirence, degree, salary_degree, walfare, work_detail, work_information, work_type,province) values(?,?,?,?,?,?,?,?,?,?,?)"

    with codecs.open(filename="new_xxxx.csv", mode='r', encoding='utf8') as f:
        reader = csv.reader(f)
        head = next(reader)  # skip the CSV header row

        # Collect every row, then insert them all with one executemany call.
        lis = []
        for item in reader:
            lis.append(tuple(item))
        db.executemany_sql(sql, tuple(lis))

SQLite's advantage is that the whole database is a single file you can simply take with you, a bit like a wallet.

If your needs are modest (not, say, tens of millions of rows), SQLite is the friendlier choice.
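
For example, a quick way to check what landed in the database (path and table name taken from the scripts above):

import sqlite3
import pandas as pd

# Pull a few rows back out of the SQLite file as a sanity check.
conn = sqlite3.connect("../my.db")
df = pd.read_sql("SELECT job_name, company_name, salary_degree FROM job LIMIT 5", conn)
print(df)
conn.close()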

Sensitive details in this article have been redacted; if you want the code, you are welcome to grab it from GitHub.

Original: https://blog.csdn.net/weixin_63587281/article/details/127928659
Author: 大佬爱睡觉
Title: Crawling a job-listing site over multiple pages with requests carrying cookies, and saving the results to MySQL and SQLite

