How to Train DETR (Detection Transformer) on Your Own Dataset

About DETR (Detection Transformer)

DETR, proposed by researchers at Facebook AI, brings the Transformer to vision: it combines a CNN backbone with a transformer encoder-decoder to make end-to-end predictions, and is used mainly for object detection and panoptic segmentation.
DETR on GitHub: https://github.com/facebookresearch/detr
DETR paper: https://arxiv.org/pdf/2005.12872.pdf

Training DETR on your own dataset

Data preparation

DETR trains on COCO-format data, so your labels and images must be organized accordingly: an annotations folder holding the JSON annotation files, plus image-only folders (in my case test, train, and val2017). So how do you get your data into COCO format? Here are two methods:

Obtaining a COCO-format dataset

1. Label your images with labelme and convert the resulting JSON files to COCO format.
2. Label with roboflow, which exports COCO format directly (access from mainland China requires a proxy). roboflow: https://app.roboflow.com/

For the labelme route: install it with pip install labelme (in a cmd Python environment or the PyCharm terminal), then run labelme to open the labeling UI. Annotate your images to produce JSON files, then run the following script:

import json
import glob
import numpy as np
import PIL.Image
import PIL.ImageDraw
from labelme import utils

# JSON encoder that handles NumPy scalars and arrays
class MyEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, np.integer):
            return int(obj)
        elif isinstance(obj, np.floating):
            return float(obj)
        elif isinstance(obj, np.ndarray):
            return obj.tolist()
        else:
            return super(MyEncoder, self).default(obj)

class labelme2coco(object):
    def __init__(self, labelme_json=[], save_json_path='./train.json'):
        self.labelme_json = labelme_json
        self.save_json_path = save_json_path
        self.images = []
        self.categories = []
        self.annotations = []

        self.label = []
        self.annID = 1
        self.height = 0
        self.width = 0

        self.save_json()

    def data_transfer(self):
        for num, json_file in enumerate(self.labelme_json):
            with open(json_file, 'r') as fp:
                data = json.load(fp)
                self.images.append(self.image(data, num))
                for shapes in data['shapes']:
                    label = shapes['label']
                    if label not in self.label:
                        self.categories.append(self.categorie(label))
                        self.label.append(label)
                    points = shapes['points']
                    # labelme rectangles store only the two diagonal corners;
                    # add the other two so we have a full 4-point polygon
                    points.append([points[0][0], points[1][1]])
                    points.append([points[1][0], points[0][1]])
                    self.annotations.append(self.annotation(points, label, num))
                    self.annID += 1

    def image(self, data, num):
        image = {}
        img = utils.img_b64_to_arr(data['imageData'])

        height, width = img.shape[:2]
        img = None
        image['height'] = height
        image['width'] = width
        image['id'] = num + 1
        image['file_name'] = data['imagePath'].split('/')[-1]

        self.height = height
        self.width = width
        return image

    def categorie(self, label):
        categorie = {}
        categorie['supercategory'] = 'Cancer'  # change to suit your own dataset
        categorie['id'] = len(self.label) + 1
        categorie['name'] = label
        return categorie

    def annotation(self, points, label, num):
        annotation = {}
        annotation['segmentation'] = [list(np.asarray(points).flatten())]
        annotation['iscrowd'] = 0
        annotation['image_id'] = num + 1

        annotation['bbox'] = list(map(float, self.getbbox(points)))
        annotation['area'] = annotation['bbox'][2] * annotation['bbox'][3]

        annotation['category_id'] = self.getcatid(label)
        annotation['id'] = self.annID
        return annotation

    def getcatid(self, label):
        for categorie in self.categories:
            if label == categorie['name']:
                return categorie['id']
        return 1

    def getbbox(self, points):

        polygons = points
        mask = self.polygons_to_mask([self.height, self.width], polygons)
        return self.mask2box(mask)

    def mask2box(self, mask):
        '''Derive the bounding box from a mask.
        mask: [h, w] image of 0s and 1s, where 1 marks the object.
        The box follows from the min/max row and column indices of the 1
        pixels (top-left and bottom-right corners).
        '''

        index = np.argwhere(mask == 1)
        rows = index[:, 0]
        cols = index[:, 1]

        left_top_r = np.min(rows)
        left_top_c = np.min(cols)

        right_bottom_r = np.max(rows)
        right_bottom_c = np.max(cols)

        return [left_top_c, left_top_r, right_bottom_c - left_top_c,
                right_bottom_r - left_top_r]

    def polygons_to_mask(self, img_shape, polygons):
        mask = np.zeros(img_shape, dtype=np.uint8)
        mask = PIL.Image.fromarray(mask)
        xy = list(map(tuple, polygons))
        PIL.ImageDraw.Draw(mask).polygon(xy=xy, outline=1, fill=1)
        mask = np.array(mask, dtype=bool)
        return mask

    def data2coco(self):
        data_coco = {}
        data_coco['images'] = self.images
        data_coco['categories'] = self.categories
        data_coco['annotations'] = self.annotations
        return data_coco

    def save_json(self):
        self.data_transfer()
        self.data_coco = self.data2coco()

        json.dump(self.data_coco, open(self.save_json_path, 'w'), indent=4, cls=MyEncoder)

# collect every labelme JSON file in the current directory
labelme_json = glob.glob(r'./*.json')

labelme2coco(labelme_json, './instances_val2017.json')

I originally found this script in someone else's CSDN post; it works well.

Downloading pretrained weights

With the dataset ready, you can speed up training by starting from pretrained weights. The official repo provides checkpoints with ResNet-50 and ResNet-101 backbones; download one from the model zoo in the GitHub README to get a .pth file.

Modifying a few things in detr-main

Because the released DETR checkpoints predict COCO's 91 classes, you need to change the number of predicted classes to match your own dataset. First, modify the .pth file downloaded in the previous step by running this script:

import torch

# load the checkpoint downloaded from the DETR model zoo
model1 = torch.load('detr-r101-2c7b67e5.pth')

num_class = 2  # number of classes in your own dataset
# resize the classification head to num_class + 1 outputs
# (the extra slot is DETR's "no object" class)
model1["model"]["class_embed.weight"].resize_(num_class + 1, 256)
model1["model"]["class_embed.bias"].resize_(num_class + 1)
torch.save(model1, "detr-r101_%d.pth" % num_class)

You also need to change num_classes in models/detr.py: the build() function picks num_classes based on the dataset (91 for COCO), so set it to your own class count instead.
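The edit looks roughly like this (a sketch assuming the upstream dataset-based default; note DETR's model adds the no-object slot itself, so do not add 1 here):

```python
# models/detr.py, inside build(): replace the dataset-based choice,
# e.g. num_classes = 20 if args.dataset_file != "coco" else 91,
# with your own class count (matching num_class in the .pth script):
num_classes = 2
```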

Training the model

For training, you can either run the command line directly or edit the defaults in main.py and run that. The official (8-GPU distributed) command is:

python -m torch.distributed.launch --nproc_per_node=8 --use_env main.py --coco_path /path/to/coco

On a single GPU you can simply run python main.py --coco_path /path/to/coco.

Closing remarks

For training I still think editing main.py is the better route, since quite a few arguments need changing. The main ones are --epochs (number of training epochs), --num_workers (depends on your machine; raise it if the hardware can handle it), --output_dir (where the model weights, i.e. .pth files, are written), --dataset_file (the dataset type, 'coco' for this format), --coco_path (location of the COCO-format dataset), and --resume (path to the pretrained weights file).
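The flags above can be sketched as an argparse block mirroring DETR's get_args_parser. The flag names match upstream; the default values here are my example choices for a small custom run, not the upstream defaults:

```python
import argparse

# Flag names follow DETR's main.py; default values are illustrative.
parser = argparse.ArgumentParser("DETR fine-tuning flags (sketch)")
parser.add_argument("--epochs", default=100, type=int)        # training epochs
parser.add_argument("--num_workers", default=2, type=int)     # data-loading workers
parser.add_argument("--output_dir", default="./outputs")      # where .pth files are written
parser.add_argument("--dataset_file", default="coco")         # dataset type
parser.add_argument("--coco_path", default="./data/coco")     # COCO-format dataset root
parser.add_argument("--resume", default="./detr-r101_2.pth")  # pretrained weights
args = parser.parse_args([])  # empty list -> use the defaults above
```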
One more note: the official repo only provides a training script, not a prediction script. In fact, prediction is quite simple: load the model, load the weight parameters, preprocess the input image, and run a forward pass. I wrote two prediction scripts; if you need them, or cannot get yours to run, feel free to contact me.
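As a rough sketch of the final step of that inference flow: DETR returns a dict with 'pred_logits' of shape [batch, 100, num_classes+1] and 'pred_boxes' in normalized cxcywh, which can be filtered and rescaled like this (the 0.7 threshold is an arbitrary choice of mine):

```python
import torch

def postprocess(outputs, img_w, img_h, threshold=0.7):
    """Keep confident DETR queries and convert their boxes from
    normalized cxcywh to pixel xyxy. Returns (scores, labels, boxes)."""
    # drop the last column: it is the "no object" class
    probs = outputs["pred_logits"].softmax(-1)[0, :, :-1]
    scores, labels = probs.max(-1)
    keep = scores > threshold
    cx, cy, w, h = outputs["pred_boxes"][0, keep].unbind(-1)
    boxes = torch.stack([(cx - w / 2) * img_w, (cy - h / 2) * img_h,
                         (cx + w / 2) * img_w, (cy + h / 2) * img_h], dim=-1)
    return scores[keep], labels[keep], boxes
```

Feed it the dict returned by a forward pass of the model after loading your fine-tuned weights.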

Finally, a look at the results: the predictions are quite accurate. (Detection result image omitted.)

Original: https://blog.csdn.net/weixin_50233398/article/details/121785953
Author: 小小凡sir
Title: How to Train DETR (Detection Transformer) on Your Own Dataset
