【PyHacker Writing Guide】Building a Website Admin-Panel Scanner

This lesson of 巡安似海's PyHacker writing guide is "Building a Website Admin-Panel Scanner".

It covers topics such as handling fake 200 pages and smart 404 detection.

If you like writing scripts in Python, follow along and code it with me.

Environment: Python 2.x

0x01:

The module we need:

import requests
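If requests isn't installed yet, `pip install requests` will pull it in.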

0x02:

First, write out the basic request code:

```python
import requests

def dir(url):
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3314.0 Safari/537.36 SE 2.X MetaSr 1.0'}
    req = requests.get(url=url, headers=headers)
    print req.status_code

dir('http://www.hackxc.cc')
```

0x03:

Set a timeout and skip verification of untrusted certificates:

```python
import urllib3
urllib3.disable_warnings()

req = requests.get(url=url, headers=headers, timeout=3, verify=False)
```

Next, add exception handling and debug it. Then improve the output: if the status code is 200, print the URL:

```python
if req.status_code == 200:
    print "[*]", req.url
```

0x04:

We will inevitably run into fake 200 pages, so let's handle those next.

The approach:

First request hackxchackxchackxc.php and xxxxxxxxxxxx (paths that almost certainly don't exist) and record the content length of the pages they return. During the actual scan, any response whose length equals one of those baseline lengths is treated as a 404.

```python
def dirsearch(u, dir):
    try:
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3314.0 Safari/537.36 SE 2.X MetaSr 1.0'}
        # baseline requests used to detect fake 200 pages
        hackxchackxchackxc = '/hackxchackxchackxc.php'
        hackxchackxchackxc_404 = requests.get(url=u+hackxchackxchackxc, headers=headers)
        # print len(hackxchackxchackxc_404.content)
        xxxxxxxxxxxx = '/xxxxxxxxxxxx'
        xxxxxxxxxxxx_404 = requests.get(url=u+xxxxxxxxxxxx, headers=headers)
        # print len(xxxxxxxxxxxx_404.content)

        # normal scan
        req = requests.get(url=u+dir, headers=headers, timeout=3, verify=False)
        # print len(req.content)
        if req.status_code == 200:
            if len(req.content) != len(hackxchackxchackxc_404.content) and len(req.content) != len(xxxxxxxxxxxx_404.content):
                print "[+]", req.url
            else:
                print u+dir, 404
    except:
        pass
```
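A quick way to sanity-check the function before wiring in a wordlist is to call it directly; the target and paths below are just examples:

```python
# hypothetical quick test of dirsearch(); paths are only illustrative
for d in ['/admin.php', '/login.php', '/backup/']:
    dirsearch('http://www.hackxc.cc', d)
```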

Very nice.

0x05:

Now let's have the results saved automatically.
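This step isn't shown on its own in the article; judging from the full code below, it comes down to appending each confirmed URL to success_dir.txt:

```python
# inside dirsearch(), in the branch where the page is a real 200:
print "[+]", req.url
# append each confirmed URL to the results file
with open('success_dir.txt', 'a+') as f:
    f.write(req.url + "\n")
```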

0x06:

Full code:

```python
#!/usr/bin/python
# -*- coding:utf-8 -*-
import requests
import urllib3
urllib3.disable_warnings()

urls = []
def dirsearch(u, dir):
    try:
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3314.0 Safari/537.36 SE 2.X MetaSr 1.0'}
        # baseline requests used to detect fake 200 pages
        hackxchackxchackxc = '/hackxchackxchackxc.php'
        hackxchackxchackxc_404 = requests.get(url=u+hackxchackxchackxc, headers=headers)
        # print len(hackxchackxchackxc_404.content)
        xxxxxxxxxxxx = '/xxxxxxxxxxxx'
        xxxxxxxxxxxx_404 = requests.get(url=u+xxxxxxxxxxxx, headers=headers)
        # print len(xxxxxxxxxxxx_404.content)

        # normal scan
        req = requests.get(url=u+dir, headers=headers, timeout=3, verify=False)
        # print len(req.content)
        if req.status_code == 200:
            if len(req.content) != len(hackxchackxchackxc_404.content) and len(req.content) != len(xxxxxxxxxxxx_404.content):
                print "[+]", req.url
                with open('success_dir.txt', 'a+') as f:
                    f.write(req.url + "\n")
            else:
                print u+dir, 404
        else:
            print u+dir, 404
    except:
        pass

if __name__ == '__main__':
    url = raw_input('\nurl:')
    print ""
    if 'http' not in url:
        url = 'http://' + url
    dirpath = open('rar.txt', 'r')
    for dir in dirpath.readlines():
        dir = dir.strip()
        dirsearch(url, dir)
```
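The script expects a wordlist named rar.txt in the working directory, one path per line; the entries below are only illustrative:

```
admin/
admin.php
admin/login.php
manage/
login.php
```

Save the script (e.g. as scan.py, the name is up to you), run it with `python scan.py`, enter the target URL at the prompt, and hits are printed with a [+] prefix and appended to success_dir.txt.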

If you liked this, give me a follow~

Original: https://www.cnblogs.com/XunanSec/p/pyhacker_houtai.html
Author: 巡安似海
Title: 【PyHacker Writing Guide】Building a Website Admin-Panel Scanner

