This article is based on the book Dive into Deep Learning (动手深度学习), following Mu Li's Kaggle walkthrough.
Note: all of the code below was run in Jupyter.
1. Competition overview:
The task: given images of leaves, classify each one into one of 176 classes.
The data is shown on the competition page; download the training and test data from the URL below:
Competition page and data download: https://www.kaggle.com/c/classify-leaves/data
2. Data splitting:
Before we start, let's see what the dataset looks like.
The .csv files store the image paths and the corresponding labels.
First, read the .csv files to see what they contain:
import collections
import math
import os
import shutil

import pandas as pd
import torch
import torchvision
from torch import nn
from d2l import torch as d2l

# Work around a duplicate-OpenMP-runtime error that can occur in Jupyter
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

data_dir = "../data/classify-leaves"  # used throughout the rest of the notebook
train_data = pd.read_csv(os.path.join(data_dir, "train.csv"))
test_data = pd.read_csv(os.path.join(data_dir, "test.csv"))
print(train_data.shape)
print(test_data.shape)
train_data
Output:
test_data
Output:
We can see that train.csv stores the relative image paths of the training set together with their labels, while test.csv stores only the relative image paths of the test set; the labels are exactly what we need to predict, so of course they are not included.
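As a quick sanity check (my own addition, not in the original walkthrough), we can confirm that the label column really contains 176 classes and peek at the class balance:

num_classes = train_data['label'].nunique()
print(num_classes)  # expect 176
print(train_data['label'].value_counts().head())  # the most frequent classes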
Now that we have a feel for the data, let's read in the dataset location and reorganize the files. Note that the training and test images all live together in the image folder, so we have to separate them using the paths listed in train.csv and test.csv.
The idea is simple: split the training and test sets apart, then create one folder per class and copy each training image into the folder for its label. (We also carve out a validation set, which is handled the same way as the training set.)
The functions below implement these steps; the details are explained in the comments.
def read_csv_labels(fname):
    """Read fname and return a dict mapping image file name to label."""
    with open(fname, 'r') as f:
        lines = f.readlines()[1:]  # skip the header line
    tokens = [l.rstrip().split(',') for l in lines]
    return dict(((name, label) for name, label in tokens))

labels = read_csv_labels(os.path.join(data_dir, 'train.csv'))
def copyfile(filename, target_dir):
    """Copy a file into a target directory, creating the directory if needed."""
    os.makedirs(target_dir, exist_ok=True)
    shutil.copy(filename, target_dir)
def reorg_train_valid(data_dir, labels, valid_ratio):
    """Split a validation set out of the original training set."""
    # Number of examples in the class with the fewest examples
    n = collections.Counter(labels.values()).most_common()[-1][1]
    # Number of validation examples to take per class
    n_valid_per_label = max(1, math.floor(n * valid_ratio))
    label_count = {}
    for train_file in labels:
        label = labels[train_file]
        fname = os.path.join(data_dir, train_file)
        # Every training image also goes into train_valid (train + valid combined)
        copyfile(
            fname,
            os.path.join(data_dir, 'train_valid_test', 'train_valid', label))
        if label not in label_count or label_count[label] < n_valid_per_label:
            # The first n_valid_per_label images of each class form the validation set
            copyfile(
                fname,
                os.path.join(data_dir, 'train_valid_test', 'valid', label))
            label_count[label] = label_count.get(label, 0) + 1
        else:
            copyfile(
                fname,
                os.path.join(data_dir, 'train_valid_test', 'train', label))
    return n_valid_per_label
def reorg_test(data_dir):
    """Copy the test images into a single 'unknown' class folder."""
    test = pd.read_csv(os.path.join(data_dir, 'test.csv'))
    for test_file in test['image']:
        copyfile(
            os.path.join(data_dir, test_file),
            os.path.join(data_dir, 'train_valid_test', 'test', 'unknown'))

def reorg_leave_data(data_dir, valid_ratio):
    """Run the full reorganization: train/valid split plus the test set."""
    labels = read_csv_labels(os.path.join(data_dir, 'train.csv'))
    reorg_train_valid(data_dir, labels, valid_ratio)
    reorg_test(data_dir)
batch_size = 128
valid_ratio = 0.1
# Only reorganize once; skip if the output folder already exists
if not os.path.exists(os.path.join(data_dir, 'train_valid_test')):
    print("start!")
    reorg_leave_data(data_dir, valid_ratio)
else:
    print("Already exists!")
print('finish!')
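If you want to verify that the reorganization worked (my own addition), counting the class folders that were just created is a quick check; there should be one folder per class:

train_dir = os.path.join(data_dir, 'train_valid_test', 'train')
print(len(os.listdir(train_dir)))  # expect 176 class folders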
3. Image augmentation:
Next we transform the images, i.e., apply image augmentation. One thing worth spelling out: augmentation here does not turn each image into several saved copies. Instead, every time an image is loaded it is randomly transformed once and then fed to the model. From the model's point of view this effectively enlarges the dataset, because the image it sees for a given sample is very likely different on each pass (the transforms are random).
transform_train = torchvision.transforms.Compose([
    # Randomly crop an area covering 0.08-1.0 of the image, then resize to 224x224
    torchvision.transforms.RandomResizedCrop(224, scale=(0.08, 1.0),
                                             ratio=(3.0 / 4.0, 4.0 / 3.0)),
    torchvision.transforms.RandomHorizontalFlip(),
    torchvision.transforms.ColorJitter(brightness=0.4, contrast=0.4,
                                       saturation=0.4),
    torchvision.transforms.ToTensor(),
    # ImageNet channel statistics, matching the pretrained backbone
    torchvision.transforms.Normalize([0.485, 0.456, 0.406],
                                     [0.229, 0.224, 0.225])])

transform_test = torchvision.transforms.Compose([
    torchvision.transforms.Resize(256),
    torchvision.transforms.CenterCrop(224),
    torchvision.transforms.ToTensor(),
    torchvision.transforms.Normalize([0.485, 0.456, 0.406],
                                     [0.229, 0.224, 0.225])])
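To convince yourself that the training transform really yields a different tensor each time an image is read, here is a small sketch (my own addition; the file name images/0.jpg is illustrative, any path listed in train.csv works):

from PIL import Image

img = Image.open(os.path.join(data_dir, 'images', '0.jpg'))
a, b = transform_train(img), transform_train(img)
print(torch.equal(a, b))  # almost always False: two random crops/flips differ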
4. Loading the data:
Earlier we split the dataset into training, validation, and test sets, with one folder per class. Now we use torchvision's ImageFolder to read the training, validation, and test sets back in.
train_ds, train_valid_ds = [
    torchvision.datasets.ImageFolder(
        os.path.join(data_dir, 'train_valid_test', folder),
        transform=transform_train) for folder in ['train', 'train_valid']]

valid_ds, test_ds = [
    torchvision.datasets.ImageFolder(
        os.path.join(data_dir, 'train_valid_test', folder),
        transform=transform_test) for folder in ['valid', 'test']]

train_iter, train_valid_iter = [
    torch.utils.data.DataLoader(dataset, batch_size, shuffle=True,
                                drop_last=True)
    for dataset in (train_ds, train_valid_ds)]

valid_iter = torch.utils.data.DataLoader(valid_ds, batch_size, shuffle=False,
                                         drop_last=True)

test_iter = torch.utils.data.DataLoader(test_ds, batch_size, shuffle=False,
                                        drop_last=False)
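A quick look at one batch (my own addition) confirms the tensor shapes and the class-to-index mapping that ImageFolder derives from the folder names:

X, y = next(iter(train_iter))
print(X.shape, y.shape)       # torch.Size([128, 3, 224, 224]) torch.Size([128])
print(len(train_ds.classes))  # 176 class names, sorted alphabetically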
At this point the data processing is finally done; next, let's look at how the model is constructed.
5. Model construction:
To practice fine-tuning (a form of transfer learning), I chose a pretrained ResNet-50 for this competition. There is some logic to this: the ResNet pretrained models are all trained on ImageNet, a dataset of roughly a million images in 1,000 classes, which includes leaf-like categories, so transfer learning is a reasonable fit.
You could also skip fine-tuning and train ResNet-50 from scratch on this data, which might work even better.
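For reference, that from-scratch alternative is essentially a one-liner (a sketch, not what this post does): torchvision can build an untrained ResNet-50 with a 176-way output head directly.

# Hypothetical from-scratch alternative: random weights, 176 output classes
scratch_net = torchvision.models.resnet50(pretrained=False, num_classes=176)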
Here is the concrete fine-tuning setup:
def get_net(devices):
    finetune_net = nn.Sequential()
    # Pretrained backbone; its final layer outputs 1000 ImageNet logits
    finetune_net.features = torchvision.models.resnet50(pretrained=True)
    # New output head mapping 1000 -> 176 leaf classes
    finetune_net.output_new = nn.Sequential(nn.Linear(1000, 512), nn.ReLU(),
                                            nn.Linear(512, 256), nn.ReLU(),
                                            nn.Linear(256, 176))
    finetune_net = finetune_net.to(devices[0])
    # Freeze the pretrained backbone; only the new head will be trained
    for param in finetune_net.features.parameters():
        param.requires_grad = False
    return finetune_net
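A dummy forward pass (my own addition) verifies that the head emits 176 logits and that only the new layers are trainable:

probe_net = get_net(d2l.try_all_gpus())
X = torch.zeros((1, 3, 224, 224), device=d2l.try_all_gpus()[0])
print(probe_net(X).shape)  # expect torch.Size([1, 176])
# Only the new output head should require gradients
print(sum(p.numel() for p in probe_net.parameters() if p.requires_grad))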
6. Computing the loss:
We use cross-entropy loss with reduction='none', which returns one loss value per example; this lets evaluate_loss sum the losses itself and divide by the exact number of examples seen.
loss = nn.CrossEntropyLoss(reduction='none')

def evaluate_loss(data_iter, net, devices):
    """Average loss per example over a dataset."""
    l_sum, n = 0.0, 0
    with torch.no_grad():  # no gradients needed during evaluation
        for features, labels in data_iter:
            features, labels = features.to(devices[0]), labels.to(devices[0])
            outputs = net(features)
            l = loss(outputs, labels)
            l_sum += l.sum()
            n += labels.numel()
    return l_sum / n
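Since the competition is scored on classification accuracy rather than loss, it can also help to track validation accuracy. A minimal sketch mirroring evaluate_loss (my own addition, not part of the original walkthrough):

def evaluate_accuracy(data_iter, net, devices):
    """Fraction of correctly classified examples (companion to evaluate_loss)."""
    correct, n = 0, 0
    with torch.no_grad():
        for features, labels in data_iter:
            features, labels = features.to(devices[0]), labels.to(devices[0])
            correct += (net(features).argmax(dim=1) == labels).sum().item()
            n += labels.numel()
    return correct / n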
7. Model training:
def train(net, train_iter, valid_iter, num_epochs, lr, wd, devices, lr_period,
          lr_decay):
    """
    wd: weight decay, used to mitigate overfitting
    lr_period: number of epochs between learning-rate drops
    lr_decay: factor by which the learning rate is multiplied at each drop
    """
    net = nn.DataParallel(net, device_ids=devices).to(devices[0])
    # Only optimize the parameters that are not frozen (the new output head)
    trainer = torch.optim.SGD(
        (param for param in net.parameters() if param.requires_grad), lr=lr,
        momentum=0.9, weight_decay=wd)
    scheduler = torch.optim.lr_scheduler.StepLR(trainer, lr_period, lr_decay)
    num_batches, timer = len(train_iter), d2l.Timer()
    legend = ['train loss']
    if valid_iter is not None:
        legend.append('valid loss')
    animator = d2l.Animator(xlabel='epoch', xlim=[1, num_epochs],
                            legend=legend)
    for epoch in range(num_epochs):
        metric = d2l.Accumulator(2)  # (sum of losses, number of examples)
        for i, (features, labels) in enumerate(train_iter):
            timer.start()
            features, labels = features.to(devices[0]), labels.to(devices[0])
            trainer.zero_grad()
            output = net(features)
            l = loss(output, labels).sum()
            l.backward()
            trainer.step()
            metric.add(l, labels.shape[0])
            timer.stop()
            # Update the training curve five times per epoch
            if (i + 1) % (num_batches // 5) == 0 or i == num_batches - 1:
                animator.add(epoch + (i + 1) / num_batches,
                             (metric[0] / metric[1], None))
        measures = f'train loss {metric[0] / metric[1]:.3f}'
        if valid_iter is not None:
            valid_loss = evaluate_loss(valid_iter, net, devices)
            animator.add(epoch + 1, (None, valid_loss.cpu().detach()))
        scheduler.step()
    if valid_iter is not None:
        measures += f', valid loss {valid_loss:.3f}'
    print(measures + f'\n{metric[1] * num_epochs / timer.sum():.1f}'
          f' examples/sec on {str(devices)}')
devices, num_epochs, lr, wd = d2l.try_all_gpus(), 10, 1e-4, 1e-4
lr_period, lr_decay, net = 2, 0.9, get_net(devices)
train(net, train_iter, valid_iter, num_epochs, lr, wd, devices, lr_period, lr_decay)
Training result:
8. Inference:
Finally, classify the test set and generate the submission file.
devices, num_epochs, lr, wd = d2l.try_all_gpus(), 20, 2e-4, 5e-4
lr_period, lr_decay = 4, 0.9
net, preds = get_net(devices), []
# Retrain on train + valid combined before predicting on the test set
train(net, train_valid_iter, None, num_epochs, lr, wd, devices, lr_period,
      lr_decay)

test = pd.read_csv(os.path.join(data_dir, 'test.csv'))
net.eval()  # switch to eval mode so batch norm uses its running statistics
with torch.no_grad():
    for X, _ in test_iter:
        y_hat = net(X.to(devices[0]))
        preds.extend(y_hat.argmax(dim=1).type(torch.int32).cpu().numpy())

sorted_ids = test['image']
df = pd.DataFrame({'image': sorted_ids, 'label': preds})
# Map predicted class indices back to class names (ImageFolder's ordering)
df['label'] = df['label'].apply(lambda x: train_valid_ds.classes[x])
df.to_csv(os.path.join(data_dir, 'submission.csv'), index=False)
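Before uploading, reading the file back (my own addition) is a cheap way to confirm the submission has the expected image,label format:

submission = pd.read_csv(os.path.join(data_dir, 'submission.csv'))
print(submission.shape)  # one row per test image, two columns
print(submission.head())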
Result:
9. Submitting the predictions to Kaggle:
Below is my score. It is not high, but the main goal here is learning.
Original: https://blog.csdn.net/weixin_45901519/article/details/119458683
Author: Ma Sizhou
Title: Kaggle competition: Classify Leaves (leaf image classification with a pretrained ResNet-50)