Image Recognition with a Convolutional Neural Network

Contents

* Project Overview
* Project Results
* Program Flowchart
* How to Use the Code
* Dataset Preparation
  + Training Set
  + Test Set
* Building the Neural Network
* Training Function
* Test Function
* Model: Complete Training Code
  + The model is saved with torch.save(model, src), where model is the model to save and src is the destination path, with a .pth extension
* Model: Complete Inference Code
  + The model is loaded with torch.load(src)

Project Overview

Goal: Classify images of insects. The same model can also be used to classify other kinds of images: simply train it on the corresponding training set, save the result as a separate model, and load that model when you need it.
Environment: PyCharm (Python 3.7) with the PyTorch library installed.
Prerequisites: A basic understanding of the principles and structure of convolutional neural networks, and familiarity with PyTorch. CSDN has many introductory articles on convolutional neural networks, for example:

https://blog.csdn.net/yunpiao123456/article/details/52437794
https://blog.csdn.net/weipf8/article/details/103917202

Algorithm design:
(1) Collect the dataset: crawl the web with Python's requests library and bs4 and download the images (see the sketch after this list)
(2) Build the convolutional neural network
(3) Train the convolutional neural network
(4) Refine the training and test sets and enlarge the dataset
(5) Save the model
(6) Load the model and test it
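As a minimal sketch of step (1), assuming a page whose img tags expose usable image URLs; the example URL, folder, and file names below are placeholders rather than the crawler actually used for this project:

import os
import requests
from bs4 import BeautifulSoup

def download_images(page_url, save_dir, limit=50):
    """Download the <img> sources found on one page into save_dir (rough sketch)."""
    os.makedirs(save_dir, exist_ok=True)
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, 'html.parser')
    count = 0
    for img in soup.find_all('img'):
        src = img.get('src')
        if not src or not src.startswith('http'):
            continue
        try:
            data = requests.get(src, timeout=10).content
        except requests.RequestException:
            continue
        with open(os.path.join(save_dir, 'img_{}.jpg'.format(count)), 'wb') as f:
            f.write(data)
        count += 1
        if count >= limit:
            break

# Placeholder usage:
# download_images('https://example.com/search?q=ladybug', 'D:\\cnn_net\\raw\\ladybug')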

Project Results

[Figures: screenshots of the training and test results]
Note: The best accuracy I reached was 85%, and it eventually settled at around 79%. Overfitting appeared along the way; it can be mitigated by reducing the number of training epochs, and with a small dataset around 10 epochs is usually enough.

Program Flowchart

[Figure: program flowchart]

How to Use the Code

Train the model first; once it has been saved you can load it directly instead of retraining it every time you want to use it. The complete code is given at the end of this post: change the dataset src paths to your own and it should normally run. If it does not, check that the third-party libraries are installed and imported correctly. Feel free to message me if you run into problems.
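In other words, the intended workflow is train once, save, then reuse the saved file; a minimal sketch using the paths that appear in the complete code later in this post:

import torch

# After training (see the complete training code at the end of the post):
torch.save(model, 'D:\\cnn_net\\datas\\model_insects.pth')

# In any later session, load the saved model instead of retraining.
# Note: the ConvNet class definition must be available when loading a whole pickled model.
model = torch.load('D:\\cnn_net\\datas\\model_insects.pth')
model.eval()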

Dataset Preparation

Note: Because the data was collected by a web crawler, it contains some noise; what is shown here is the data after cleaning.
Note: Training set : test set = 7 : 3 (you can change this ratio; one way to produce the split is sketched after these notes).
Note: If the accuracy is unsatisfactory, you can enlarge the dataset, clean it more thoroughly, or improve the image preprocessing.
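A simple way to produce the 7 : 3 split is to move a random 30% of each class folder into a matching test folder; a sketch, with the folder paths taken from the complete code later in this post:

import os
import random
import shutil

def split_dataset(train_dir, test_dir, test_ratio=0.3, seed=0):
    """Move a random test_ratio share of every class folder from train_dir into test_dir."""
    random.seed(seed)
    for class_name in os.listdir(train_dir):
        src_class = os.path.join(train_dir, class_name)
        if not os.path.isdir(src_class):
            continue
        dst_class = os.path.join(test_dir, class_name)
        os.makedirs(dst_class, exist_ok=True)
        files = os.listdir(src_class)
        random.shuffle(files)
        for name in files[:int(len(files) * test_ratio)]:
            shutil.move(os.path.join(src_class, name), os.path.join(dst_class, name))

# split_dataset('D:\\cnn_net\\train\\insects', 'D:\\cnn_net\\train\\test')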

Training Set

[Figures: a sample of the training data]

Test Set

The file and folder layout is the same as for the training set.
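For reference, torchvision's ImageFolder (used in the complete code below) expects one sub-folder per class, so the directory layout looks roughly like this; the class folder names come from the inference code, while the file names are only illustrative:

D:\cnn_net\train\insects\        <- training set (ImageFolder root)
    瓢虫\
        p1.jpg, p2.jpg, ...
    螳螂\
        t1.jpg, t2.jpg, ...
D:\cnn_net\train\test\           <- test set, same class sub-folders
    瓢虫\
    螳螂\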

Building the Neural Network

Framework:

[Figure: overall network framework]

Structure:

[Figure: layer-by-layer structure of the network]

Implementation:

class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        # Feature extractor: six 3x3 convolutions (no padding) interleaved with 2x2 max pooling
        self.conv1 = nn.Conv2d(3, 32, 3)
        self.max_pool1 = nn.MaxPool2d(2)
        self.conv2 = nn.Conv2d(32, 64, 3)
        self.max_pool2 = nn.MaxPool2d(2)
        self.conv3 = nn.Conv2d(64, 64, 3)
        self.conv4 = nn.Conv2d(64, 64, 3)
        self.max_pool3 = nn.MaxPool2d(2)
        self.conv5 = nn.Conv2d(64, 128, 3)
        self.conv6 = nn.Conv2d(128, 128, 3)
        self.max_pool4 = nn.MaxPool2d(2)
        # Classifier: 128 * 6 * 6 = 4608 flattened features -> 512 -> 1 (binary output)
        self.fc1 = nn.Linear(4608, 512)
        self.fc2 = nn.Linear(512, 1)

    def forward(self, x):
        in_size = x.size(0)  # batch size, used when flattening below
        x = self.conv1(x)
        x = F.relu(x)
        x = self.max_pool1(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = self.max_pool2(x)
        x = self.conv3(x)
        x = F.relu(x)
        x = self.conv4(x)
        x = F.relu(x)
        x = self.max_pool3(x)
        x = self.conv5(x)
        x = F.relu(x)
        x = self.conv6(x)
        x = F.relu(x)
        x = self.max_pool4(x)

        x = x.view(in_size, -1)  # flatten to (batch, 4608)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        x = torch.sigmoid(x)  # probability of the positive class (binary classification)
        return x
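For reference, the 4608 input size of fc1 follows from the 150 × 150 input produced by the transforms used later in this post: each 3 × 3 convolution (no padding) shrinks the feature map by 2 pixels and each pooling halves it, giving 150 → 148 → 74 → 72 → 36 → 34 → 32 → 16 → 14 → 12 → 6, so the flattened size is 128 × 6 × 6 = 4608. A quick sanity check, assuming the ConvNet class above and the usual torch imports:

net = ConvNet()
with torch.no_grad():
    out = net(torch.zeros(1, 3, 150, 150))  # the forward pass only works because the flattened size is 4608
print(out.shape)  # torch.Size([1, 1])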

Training Function

def train(model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):
        # Labels become float column vectors of shape (batch, 1) to match the sigmoid output
        data, target = data.to(device), target.to(device).float().unsqueeze(1)
        optimizer.zero_grad()
        output = model(data)
        loss = F.binary_cross_entropy(output, target)
        loss.backward()
        optimizer.step()
        # (batch_idx + 1) % 1 is always 0, so progress is printed for every batch
        if (batch_idx + 1) % 1 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, (batch_idx + 1) * len(data), len(train_loader.dataset),
                100. * (batch_idx + 1) / len(train_loader), loss.item()))

Test Function

def test(model, device, test_loader):
    model.eval()
    test_loss = 0
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device).float().unsqueeze(1)
            output = model(data)
            test_loss += F.binary_cross_entropy(output, target, reduction='mean').item()
            # Threshold the sigmoid output at 0.5 to get a hard 0/1 prediction
            pred = torch.tensor([[1] if num[0] >= 0.5 else [0] for num in output]).to(device)
            correct += pred.eq(target.long()).sum().item()
        test_loss /= len(test_loader)  # average the per-batch mean losses
        print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
            test_loss, correct, len(test_loader.dataset),
            100. * correct / len(test_loader.dataset)))

Model: Complete Training Code

The model is saved with torch.save(model, src), where model is the model to be saved and src is the destination path; the file uses the .pth extension.

import torch.nn.functional as F
import torch.optim as optim
import torch
import torch.nn as nn
import torch.nn.parallel
from PIL import Image
import torch.optim
import torch.utils.data
import torch.utils.data.distributed
import torchvision.transforms as transforms
import torchvision.datasets as datasets

BATCH_SIZE = 20

EPOCHS = 10

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

transform = transforms.Compose([
    transforms.Resize(100),
    transforms.RandomVerticalFlip(),
    transforms.RandomCrop(50),
    transforms.RandomResizedCrop(150),
    transforms.ColorJitter(brightness=0.5, contrast=0.5, hue=0.5),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
])

dataset_train = datasets.ImageFolder('D:\\cnn_net\\train\\insects', transform)

dataset_test = datasets.ImageFolder('D:\\cnn_net\\train\\test', transform)

test_loader = torch.utils.data.DataLoader(dataset_test, batch_size=BATCH_SIZE, shuffle=True)

classes = dataset_train.classes
class_to_idxes = dataset_train.class_to_idx
print(class_to_idxes)

train_loader = torch.utils.data.DataLoader(dataset_train, batch_size=BATCH_SIZE, shuffle=True)

class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 32, 3)
        self.max_pool1 = nn.MaxPool2d(2)
        self.conv2 = nn.Conv2d(32, 64, 3)
        self.max_pool2 = nn.MaxPool2d(2)
        self.conv3 = nn.Conv2d(64, 64, 3)
        self.conv4 = nn.Conv2d(64, 64, 3)
        self.max_pool3 = nn.MaxPool2d(2)
        self.conv5 = nn.Conv2d(64, 128, 3)
        self.conv6 = nn.Conv2d(128, 128, 3)
        self.max_pool4 = nn.MaxPool2d(2)
        self.fc1 = nn.Linear(4608, 512)
        self.fc2 = nn.Linear(512, 1)

    def forward(self, x):
        in_size = x.size(0)
        x = self.conv1(x)
        x = F.relu(x)
        x = self.max_pool1(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = self.max_pool2(x)
        x = self.conv3(x)
        x = F.relu(x)
        x = self.conv4(x)
        x = F.relu(x)
        x = self.max_pool3(x)
        x = self.conv5(x)
        x = F.relu(x)
        x = self.conv6(x)
        x = F.relu(x)
        x = self.max_pool4(x)

        x = x.view(in_size, -1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        x = torch.sigmoid(x)
        return x

modellr = 1e-4

model = ConvNet().to(device)
print(model)

optimizer = optim.Adam(model.parameters(), lr=modellr)

def adjust_learning_rate(optimizer, epoch):
    """Sets the learning rate to the initial LR decayed by 10 every 5 epochs"""
    modellrnew = modellr * (0.1 ** (epoch // 5))
    print("lr:", modellrnew)
    for param_group in optimizer.param_groups:
        param_group['lr'] = modellrnew

def train(model, device, train_loader, optimizer, epoch):
    model.train()
    for batch_idx, (data, target) in enumerate(train_loader):

        data, target = data.to(device), target.to(device).float().unsqueeze(1)

        optimizer.zero_grad()

        output = model(data)

        loss = F.binary_cross_entropy(output, target)

        loss.backward()

        optimizer.step()

        if (batch_idx + 1) % 1 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(

                epoch, (batch_idx + 1) * len(data), len(train_loader.dataset),

                       100. * (batch_idx + 1) / len(train_loader), loss.item()))

def test(model, device, test_loader):
    model.eval()

    test_loss = 0

    correct = 0

    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device).float().unsqueeze(1)

            output = model(data)

            test_loss += F.binary_cross_entropy(output, target, reduction='mean').item()
            pred = torch.tensor([[1] if num[0] >= 0.5 else [0] for num in output]).to(device)
            correct += pred.eq(target.long()).sum().item()

        test_loss /= len(test_loader)  # average the per-batch mean losses
        print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
            test_loss, correct, len(test_loader.dataset),
            100. * correct / len(test_loader.dataset)))

for epoch in range(1, EPOCHS + 1):
    adjust_learning_rate(optimizer, epoch)
    train(model, device, train_loader, optimizer, epoch)
    test(model, device, test_loader)

torch.save(model, 'D:\\cnn_net\\datas\\model_insects.pth')
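As noted in the results section, the model can start to overfit as training continues. One simple variant of the loop above keeps only the checkpoint with the best test accuracy. This is a sketch, assuming the model, loaders, and functions defined above; evaluate_accuracy is a hypothetical helper (the test function in this post prints accuracy but does not return it), and the _best file name is new:

def evaluate_accuracy(model, device, test_loader):
    """Return the fraction of test samples classified correctly (hypothetical helper)."""
    model.eval()
    correct = 0
    with torch.no_grad():
        for data, target in test_loader:
            data, target = data.to(device), target.to(device).float().unsqueeze(1)
            pred = (model(data) >= 0.5).long()
            correct += pred.eq(target.long()).sum().item()
    return correct / len(test_loader.dataset)

best_acc = 0.0
for epoch in range(1, EPOCHS + 1):
    adjust_learning_rate(optimizer, epoch)
    train(model, device, train_loader, optimizer, epoch)
    acc = evaluate_accuracy(model, device, test_loader)
    if acc > best_acc:
        best_acc = acc
        torch.save(model, 'D:\\cnn_net\\datas\\model_insects_best.pth')  # keep only the best checkpoint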

Model: Complete Inference Code

The model is loaded with torch.load(src).


from PIL import Image

from torchvision import transforms
import torch.nn.functional as F

import torch
import torch.nn as nn
import torch.nn.parallel

class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        self.conv1 = nn.Conv2d(3, 32, 3)
        self.max_pool1 = nn.MaxPool2d(2)
        self.conv2 = nn.Conv2d(32, 64, 3)
        self.max_pool2 = nn.MaxPool2d(2)
        self.conv3 = nn.Conv2d(64, 64, 3)
        self.conv4 = nn.Conv2d(64, 64, 3)
        self.max_pool3 = nn.MaxPool2d(2)
        self.conv5 = nn.Conv2d(64, 128, 3)
        self.conv6 = nn.Conv2d(128, 128, 3)
        self.max_pool4 = nn.MaxPool2d(2)
        self.fc1 = nn.Linear(4608, 512)
        self.fc2 = nn.Linear(512, 1)

    def forward(self, x):
        in_size = x.size(0)
        x = self.conv1(x)
        x = F.relu(x)
        x = self.max_pool1(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = self.max_pool2(x)
        x = self.conv3(x)
        x = F.relu(x)
        x = self.conv4(x)
        x = F.relu(x)
        x = self.max_pool3(x)
        x = self.conv5(x)
        x = F.relu(x)
        x = self.conv6(x)
        x = F.relu(x)
        x = self.max_pool4(x)

        x = x.view(in_size, -1)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        x = torch.sigmoid(x)
        return x

class_names = ['瓢虫', '螳螂']  # ladybug, mantis

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = torch.load('D:\\cnn_net\\datas\\model_insects.pth', map_location=device)  # map_location lets the model load on CPU-only machines too
model.eval()

image_PIL = Image.open('D:\\cnn_net\\train\\insects\\瓢虫\\p49.jpg')

transform_test = transforms.Compose([
    transforms.Resize(100),
    transforms.RandomVerticalFlip(),
    transforms.RandomCrop(50),
    transforms.RandomResizedCrop(150),
    transforms.ColorJitter(brightness=0.5, contrast=0.5, hue=0.5),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])
    ])

image_tensor = transform_test(image_PIL)

image_tensor.unsqueeze_(0)

image_tensor = image_tensor.to(device)

out = model(image_tensor)

# Threshold the sigmoid output at 0.5 and map the class index to its name
pred = torch.tensor([[1] if num[0] >= 0.5 else [0] for num in out]).to(device)
print(class_names[pred.item()])
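Note that transform_test above reuses the random training augmentations (random flips, random crops, color jitter), so running the script twice on the same image can give different predictions. For deterministic single-image inference, one common alternative is a fixed pipeline with the same final 150 × 150 size and normalization; this is a sketch, not the original author's code:

from torchvision import transforms

deterministic_transform = transforms.Compose([
    transforms.Resize((150, 150)),  # fixed size, no random cropping
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])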

If you spot any mistakes, feedback and corrections are welcome; let's learn and improve together.

Original: https://blog.csdn.net/Satenga/article/details/122341233
Author: 是aaaa阿腾阿
Title: 卷积神经网络实现图像识别 (Image Recognition with a Convolutional Neural Network)
