An Introduction to ResNet18, and Using It to Classify CIFAR-10

ResNet was introduced by Kaiming He et al. in "Deep Residual Learning for Image Recognition" (arXiv 2015, published at CVPR 2016), which proposed the idea of residual connections. The paper took the computer vision community by storm: ResNet won first place in the ILSVRC 2015 ImageNet competition, and residual connections are still used in state-of-the-art models across many areas of AI today.

If my future papers ever get a tenth of ResNet's citations, I'll be satisfied (laughs).

ResNet Overview

ResNet addresses the degradation problem in deep networks. Intuitively, a deeper network should be able to fit more complex functions. In practice, however, simply stacking more layers does not necessarily help: training can suffer from poor fitting and vanishing gradients. The paper illustrates this with 20-layer and 56-layer plain CNNs on CIFAR-10: the 56-layer network actually achieves worse test accuracy than the 20-layer one.

[Figure: test error of 20-layer vs. 56-layer plain CNNs on CIFAR-10]

During backpropagation, the gradient reaching a layer is the product of the per-layer gradients along the way. Late in training, or in very deep networks, these per-layer gradients tend to be very small, so their product shrinks toward zero. To address this, He et al. proposed the concept of residual learning.
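The compounding effect can be seen with a toy calculation (illustrative numbers of my own, not from the paper): if every layer contributes a gradient factor below 1, the product collapses toward zero as depth grows.

```python
# Toy illustration of vanishing gradients: backpropagation multiplies
# per-layer gradient factors, so many factors below 1 shrink the total
# gradient reaching the early layers toward zero.
def total_gradient(per_layer_factor, depth):
    g = 1.0
    for _ in range(depth):
        g *= per_layer_factor
    return g

print(total_gradient(0.8, 20))  # ~1.2e-2 for a 20-layer net
print(total_gradient(0.8, 56))  # ~3.7e-6 for a 56-layer net
```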

Residual Learning

When we add extra layers on top of an existing network, the conventional approach is simply to stack them, so the new layers transform the previous output directly. Residual learning does something different: if the input to the new layers is x and the desired mapping is H(x), we ask the layers to learn the residual F(x) = H(x) - x instead. The original mapping is then recovered as F(x) + x; that is, at the output we add the input x back onto F(x).

[Figure: a residual learning building block]

Stacking new layers directly on top of the original network easily causes degradation: gradients become very small. With the residual formulation, the output is the residual plus the input, F(x) + x, and differentiating the identity term x contributes exactly 1 to the gradient. Informally, the gradient through such a layer is a (possibly tiny) residual gradient plus 1. The residual gradients will of course not conveniently all be 1, but even when they are small, the constant 1 from the skip connection keeps the total gradient from vanishing. This makes residual networks much easier to train.
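The "+1" in that argument can be checked numerically. In this sketch (my own toy functions, not the paper's notation), a deliberately flat f stands in for a block whose own gradient is tiny; adding the identity shifts the slope up by exactly 1:

```python
# Central-difference numerical derivative.
def num_grad(fn, x, eps=1e-6):
    return (fn(x + eps) - fn(x - eps)) / (2 * eps)

def f(x):
    # stand-in for a deep sub-network whose gradient is nearly zero
    return 0.001 * x

def h(x):
    # residual form: block output plus the skip connection
    return f(x) + x

print(num_grad(f, 2.0))  # ~0.001: a near-vanishing gradient
print(num_grad(h, 2.0))  # ~1.001: the skip contributes +1, so it cannot vanish
```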

Network Architecture

ResNet adopts a VGG-like design and improves on it by adding shortcut connections that form residual units. The basic unit is still the familiar convolution / batch norm / activation pattern, but a skip connection is added at each unit's output: the unit's output is summed with its input and then passed through a final activation.

[Figure: structure of a residual unit]

The structure of the residual unit differs between the shallower and deeper ResNet variants.

[Figure: residual unit structures for different ResNet depths]

For ResNets with fewer than 50 layers, a residual unit contains two convolutions: one 3×3 convolution with padding 1 that preserves the feature-map size, and another that halves the spatial size while doubling the number of channels, so that not too much information is lost and the network's complexity stays manageable. For 50 layers and above, a bottleneck unit is used: a 1×1 convolution first maps the feature map to the desired number of channels, then a 3×3 convolution like the one above changes the spatial size, and finally another 1×1 convolution expands the channel count by a factor of four. In the architecture diagram, ResNet adds a shortcut every two layers compared with a plain network, which forms the residual learning; dashed shortcuts indicate that the number of feature maps changes.
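These size claims all follow the standard convolution formula out = floor((i − k + 2p)/s) + 1. A small helper (the name is mine, purely illustrative) makes them easy to verify:

```python
def conv_out(i, k, s=1, p=0):
    """Output spatial size of a convolution: floor((i - k + 2p)/s) + 1."""
    return (i - k + 2 * p) // s + 1

assert conv_out(56, k=3, s=1, p=1) == 56   # 3x3, padding 1: size preserved
assert conv_out(56, k=3, s=2, p=1) == 28   # 3x3, stride 2: size halved
assert conv_out(56, k=1, s=1, p=0) == 56   # 1x1: only channels change
# the stem: 224 -> 7x7 stride-2 conv -> 112 -> 3x3 stride-2 pool -> 56
assert conv_out(conv_out(224, k=7, s=2, p=3), k=3, s=2, p=1) == 56
print("all sizes check out")
```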

[Figure: ResNet architecture compared with a plain network; dashed shortcuts indicate a change in feature-map count]

Implementing ResNet in PyTorch

import torch
import time
from torch import nn

# Initial convolution layer: turns the input image into a feature map
class Conv1(nn.Module):
    def __init__(self,inp_channels,out_channels,stride = 2):
        super(Conv1,self).__init__()
        self.net = nn.Sequential(
            nn.Conv2d(inp_channels,out_channels,kernel_size=7,stride=stride,padding=3,bias=False),# output size = (i - k + 2*p)/s + 1, so this halves the spatial size
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(kernel_size=3,stride=2,padding=1)# by the same formula, the pooling halves the feature map again
        )

    def forward(self,x):
        y = self.net(x)
        return y

class Simple_Res_Block(nn.Module):
    def __init__(self,inp_channels,out_channels,stride=1,downsample = False,expansion_=False):
        super(Simple_Res_Block,self).__init__()
        self.downsample = downsample
        if expansion_:
            self.expansion = 4# expand the channel count fourfold
        else:
            self.expansion = 1

        self.block = nn.Sequential(
            nn.Conv2d(inp_channels,out_channels,kernel_size=3,stride=stride,padding=1),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels,out_channels*self.expansion,kernel_size=3,padding=1),
            nn.BatchNorm2d(out_channels*self.expansion)
        )
        if self.downsample:
            self.down = nn.Sequential(
                nn.Conv2d(inp_channels,out_channels*self.expansion,kernel_size=1,stride=stride,bias=False),
                nn.BatchNorm2d(out_channels*self.expansion)
            )
        self.relu = nn.ReLU(inplace=True)

    def forward(self,input):
        residual = input
        x = self.block(input)
        if self.downsample:
            residual = self.down(residual)# project the input so its shape matches the block output

        out = residual + x
        out = self.relu(out)
        return out

class Residual_Block(nn.Module):
    def __init__(self,inp_channels,out_channels,stride=1,downsample = False,expansion_=False):
        super(Residual_Block,self).__init__()
        self.downsample = downsample# whether to project x so it matches the channel count of this block's output
        if expansion_:
            self.expansion = 4# expand the channel count fourfold
        else:
            self.expansion = 1

        # layers
        self.conv1 = nn.Conv2d(inp_channels,out_channels,kernel_size=1,stride=1,bias=False)# 1x1 conv: keeps the spatial size, acts as a channel projection
        self.BN1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels,out_channels,kernel_size=3,stride=stride,padding=1,bias=False)# with kernel 3 and padding 1, the spatial size depends only on the stride
        self.BN2 = nn.BatchNorm2d(out_channels)
        self.conv3 = nn.Conv2d(out_channels,out_channels*self.expansion,kernel_size=1,stride=1,bias=False)# 1x1 conv: changes the channel count
        self.BN3 = nn.BatchNorm2d(out_channels*self.expansion)
        self.relu = nn.ReLU(inplace=True)

        if self.downsample:
            self.down = nn.Sequential(
                nn.Conv2d(inp_channels,out_channels*self.expansion,kernel_size=1,stride=stride,bias=False),
                nn.BatchNorm2d(out_channels*self.expansion)
            )

    def forward(self,input):
        residual = input
        x = self.relu(self.BN1(self.conv1(input)))
        x = self.relu(self.BN2(self.conv2(x)))
        h = self.BN3(self.conv3(x))

        if self.downsample:
            residual = self.down(residual)# project the input so its shape matches h
        out = h + residual# the residual addition
        out = self.relu(out)
        return out

class Resnet(nn.Module):
    def __init__(self,net_block,block,num_class = 1000,expansion_=False):
        super(Resnet,self).__init__()
        self.expansion_ = expansion_
        if expansion_:
            self.expansion = 4# expand the channel count fourfold
        else:
            self.expansion = 1

        # stem convolution applied to the input image
        # (3,224,224) --> (64,56,56)
        self.conv = Conv1(3,64)

        # residual stages (channel counts shown for the bottleneck variant)
        # (64,56,56) --> (256,56,56)
        self.block1 = self.make_layer(net_block,block[0],64,64,expansion_=self.expansion_,stride=1)# stride 1: spatial size unchanged
        # (256,56,56) --> (512,28,28)
        self.block2 = self.make_layer(net_block,block[1],64*self.expansion,128,expansion_=self.expansion_,stride=2)
        # (512,28,28) --> (1024,14,14)
        self.block3 = self.make_layer(net_block,block[2],128*self.expansion,256,expansion_=self.expansion_,stride=2)
        # (1024,14,14) --> (2048,7,7)
        self.block4 = self.make_layer(net_block,block[3],256*self.expansion,512,expansion_=self.expansion_,stride=2)

        self.avgPool = nn.AvgPool2d(7,stride=1)# (2048,7,7) --> (2048,1,1): averages each 7x7 map down to a single value
        if expansion_:
            length = 2048
        else:
            length = 512
        self.linear = nn.Linear(length,num_class)

        for m in self.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
            elif isinstance(m, nn.BatchNorm2d):
                nn.init.constant_(m.weight, 1)
                nn.init.constant_(m.bias, 0)

    def make_layer(self,net_block,layers,inp_channels,out_channels,expansion_=False,stride = 1):
        block = []
        block.append(net_block(inp_channels,out_channels,stride=stride,downsample=True,expansion_=expansion_))# the first block projects the previous stage's channels to this stage's and applies the stride
        if expansion_:
            self.expansion = 4
        else:
            self.expansion = 1
        for i in range(1,layers):
            block.append(net_block(out_channels*self.expansion,out_channels,expansion_=expansion_))
        return nn.Sequential(*block)

    def forward(self,x):
        x = self.conv(x)
        x = self.block1(x)
        x = self.block2(x)
        x = self.block3(x)
        x = self.block4(x)

        # x = self.avgPool(x)# with 32x32 CIFAR-10 inputs, block4 already outputs 1x1 maps, so the 7x7 avg pool is skipped
        x = x.view(x.shape[0],-1)
        x = self.linear(x)

        return x

def Resnet18():
    return Resnet(Simple_Res_Block,[2,2,2,2],num_class=10,expansion_=False)# each residual block here contains only two convolutions

def Resnet34():
    return Resnet(Simple_Res_Block,[3,4,6,3],num_class=10,expansion_=False)

def Resnet50():
    return Resnet(Residual_Block,[3,4,6,3],expansion_=True)# 16 bottleneck blocks of three convolutions each, plus the stem conv and the final fully connected layer: 50 layers in total

def Resnet101():
    return Resnet(Residual_Block,[3,4,23,3],expansion_=True)

def Resnet152():
    return Resnet(Residual_Block,[3,8,36,3],expansion_=True)

These definitions cover ResNet18, 34, 50, 101 and 152.
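The names come from counting weighted layers: two convolutions per basic block, three per bottleneck block, plus the stem convolution and the final fully connected layer. A quick sanity check of the configurations used above:

```python
def depth(blocks_per_stage, convs_per_block):
    # stem conv + convs inside the residual blocks + final fc layer
    return 1 + convs_per_block * sum(blocks_per_stage) + 1

assert depth([2, 2, 2, 2], 2) == 18    # ResNet18
assert depth([3, 4, 6, 3], 2) == 34    # ResNet34
assert depth([3, 4, 6, 3], 3) == 50    # ResNet50
assert depth([3, 4, 23, 3], 3) == 101  # ResNet101
assert depth([3, 8, 36, 3], 3) == 152  # ResNet152
```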

Classifying CIFAR-10

Training on CIFAR-10 (CIFAR-100 works the same way):
import torch
import os
import time
import torchvision
import tqdm
import numpy as np
from torch.utils.data import Dataset,DataLoader
from ResNet import Resnet18,Resnet34,Resnet50,Resnet101,Resnet152
from visualizer import Vis

class opt():
    model_name = 'Resnet18'
    save_path = 'checkpoints'
    save_name = 'lastest_param.pth'
    device = 'cuda'
    batch_size = 128
    learning_rate = 0.001
    epoch = 60
    state_file = 'checkpoints/result/lastest_param.pth'
    load_f = True
    classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
    train_transform = torchvision.transforms.Compose([
        torchvision.transforms.RandomCrop(32,padding=4),
        torchvision.transforms.RandomHorizontalFlip(p=0.5),
        torchvision.transforms.ToTensor(),
        torchvision.transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))
    ])
    test_transform = torchvision.transforms.Compose([
        torchvision.transforms.ToTensor(),
        torchvision.transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))
    ])

def load_save(model,load_f = False):
    if load_f:
        state = torch.load(opt.state_file)
        model.load_state_dict(state)
        return model
    else:
        return model

# model
if opt.model_name == "Resnet18":
    model = Resnet18()
    model.to(opt.device)
elif opt.model_name == "Resnet34":
    model = Resnet34()
    model.to(opt.device)
elif opt.model_name == "Resnet50":
    model = Resnet50()
    model.to(opt.device)
load_save(model,opt.load_f)

# dataset
train_dataset = torchvision.datasets.CIFAR10(
    root = 'data',
    train = True,
    transform = opt.train_transform,
    download=True
)

test_dataset = torchvision.datasets.CIFAR10(
    root = 'data',
    train = False,
    transform = opt.test_transform,
    download=True
)

# dataloader
train_loader = DataLoader(
    train_dataset,
    batch_size=opt.batch_size,
    shuffle=True,
    num_workers=6
)

test_loader = DataLoader(
    test_dataset,
    batch_size=100,
    shuffle=False,
    num_workers=6
)

# loss
loss_fn = torch.nn.CrossEntropyLoss()# cross-entropy loss
# optimizer
optim = torch.optim.SGD(model.parameters(),lr=opt.learning_rate,momentum=0.9,weight_decay=5e-4)# weight decay adds an L2 penalty on the weights; if the model converges poorly, lower it
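What weight_decay does can be written out by hand: it adds λ·w to each gradient, which is equivalent to an L2 penalty on the weights. A single SGD update with illustrative values of my own (momentum omitted for brevity):

```python
def sgd_step(w, grad, lr, weight_decay=0.0):
    # weight decay folds an L2 penalty into the gradient: g' = g + wd * w
    return w - lr * (grad + weight_decay * w)

w = 1.0
plain = sgd_step(w, grad=0.0, lr=0.1)                       # stays at 1.0
decayed = sgd_step(w, grad=0.0, lr=0.1, weight_decay=5e-4)  # pulled slightly toward zero
print(plain, decayed)
```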
flag = 0

def reverse_norm(img,mean=None,std=None):
    imgs = []
    for i in range(img.size(0)):
        image = img[i].data.cpu().numpy().transpose(1, 2, 0)
        if (mean is not None) and (std is not None):
            image = (image * std + mean) * 255
        else:  # the image only went through ToTensor()
            image = image * 255
        imgs.append(image.transpose(2,0,1))
    return np.stack(imgs)
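reverse_norm above simply inverts Normalize channel by channel (x = x_norm * std + mean) before rescaling to 0-255. A scalar round trip with the red-channel statistics from the transforms confirms the inversion:

```python
mean, std = 0.4914, 0.2023  # CIFAR-10 red-channel statistics used above

def normalize(x, mean, std):
    return (x - mean) / std

def denormalize(x_norm, mean, std):
    return x_norm * std + mean

pixel = 0.75  # a pixel value after ToTensor(), in [0, 1]
restored = denormalize(normalize(pixel, mean, std), mean, std)
print(restored)  # ~0.75: the original value is recovered
```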

for epoch in range(opt.epoch):
    now = time.time()
    print('---epoch{}---'.format(epoch))
    model.train()
    loss_epoch = 0
    true_pre_epoch = 0
    correct = 0

    for i,(img,label) in enumerate(tqdm.tqdm(train_loader)):
        img,label = img.to(opt.device),label.to(opt.device)
        output = model(img)

        loss = loss_fn(output,label)
        loss.backward()
        optim.step()
        optim.zero_grad()
        flag += 1
        loss_epoch += loss.data

        pre = torch.argmax(output, dim=1)
        num_true = (pre == label).sum()
        true_pre_epoch += num_true
        correct += label.shape[0]

        if (i+1)%100 == 0:
            print('epoch {} iter {} loss : {}'.format(epoch,i+1,loss_epoch/(i+1)))
        if (i+1)%200 == 0:
            acc = true_pre_epoch/correct
            print('epoch {} iter {} train_acc : {}'.format(epoch,i+1,acc))

            imgs = reverse_norm(img,mean=(0.4914, 0.4822, 0.4465),std=(0.2023, 0.1994, 0.2010))
            # visualization
            vis = Vis()
            vis.linee(Y=loss_epoch/(i+1),X=flag,win='loss')
            vis.linee(Y=acc,X=flag,win='acc')
            vis.Image(imgs,pre,opt.classes)

    # save
    model_path = os.path.join(opt.save_path,opt.save_name)
    torch.save(model.state_dict(),model_path)

    # test
    model.eval()
    num = 0
    labels = 0
    for img ,label in test_loader:
        img, label = img.to(opt.device), label.to(opt.device)
        output = model(img)

        num += (torch.argmax(output,dim=1).data == label.data).sum()
        labels += label.shape[0]
    fin = time.time()
    print('epoch {} test_acc : {}   time per epoch: {}s'.format(epoch,num/labels,fin-now))

Results

[Figure: test accuracy]


Original: https://blog.csdn.net/mikasa1028/article/details/121255568
Author: mikasa1028
Title: 简介ResNet18并用其对CIFAR-10数据集进行分类
