[PyTorch] Implementing a Feedforward Neural Network with torch.nn


Feedforward Neural Networks

A feedforward neural network, also called a deep feedforward network or multilayer perceptron, is named "feedforward" because information flows through the intermediate function computations and finally reaches the output; there are no feedback connections from the model's outputs back into the model itself.
The hidden layers of a feedforward network need nonlinear activation functions. Without them every layer is linear, a composition of linear layers is still linear, and the final output is just a linear fit that cannot capture nonlinear problems.
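As a quick sanity check (my own illustration, not from the original post), two stacked nn.Linear layers with no activation in between collapse into a single affine map:

import torch

# two Linear layers with no activation in between
l1 = torch.nn.Linear(4, 8)
l2 = torch.nn.Linear(8, 3)

x = torch.randn(5, 4)
y = l2(l1(x))

# the composition is itself one affine map: W = W2 @ W1, b = b1 @ W2^T + b2
W = l2.weight @ l1.weight              # shape (3, 4)
b = l1.bias @ l2.weight.t() + l2.bias  # shape (3,)
y_single = x @ W.t() + b

print(torch.allclose(y, y_single, atol=1e-6))  # True -> stacking gained no expressive power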

Experiment Requirements

  1. Implement a feedforward neural network on the Fashion-MNIST dataset with torch.nn and plot the loss curves of the training and test sets
  2. Compare the results of three different activation functions
  3. Compare the results of different numbers of hidden layers and hidden units
  4. In the experiments above, implement dropout both by hand and with torch.nn, and study how different dropout rates affect the results
  5. Implement L2 regularization both by hand and with torch.nn, and study how different penalty weights affect the results
  6. Take the best model from the experiments above and evaluate it with 10-fold cross-validation

1. Implementing a feedforward neural network with torch.nn

Importing the packages and loading the Fashion-MNIST dataset are covered in an earlier post; a sketch of what the code assumes is given below, after which we build the model.
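Roughly, the imports and data loading the rest of the code relies on look like this (a minimal sketch; the dataset path and batch_size are my own choices, not taken from the original post):

import numpy as np
import torch
import torchvision
import torchvision.transforms as transforms
import torch.utils.data as Data
import matplotlib.pyplot as plt
from tqdm import tqdm
from copy import deepcopy

batch_size = 256

mnist_train = torchvision.datasets.FashionMNIST(
    root='./Datasets/FashionMNIST', train=True, download=True,
    transform=transforms.ToTensor())
mnist_test = torchvision.datasets.FashionMNIST(
    root='./Datasets/FashionMNIST', train=False, download=True,
    transform=transforms.ToTensor())

train_iter = Data.DataLoader(mnist_train, batch_size=batch_size, shuffle=True, num_workers=0)
test_iter = Data.DataLoader(mnist_test, batch_size=batch_size, shuffle=False, num_workers=0)

With the data iterators in place, we can define the model: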


num_inputs = 784    # 28 x 28 pixels per image
num_outputs = 10    # 10 clothing classes
num_hiddens = 256   # units in the single hidden layer

class FlattenLayer(torch.nn.Module):
    # flattens each image into a (batch_size, 784) matrix
    def __init__(self):
        super(FlattenLayer, self).__init__()

    def forward(self, x):
        return x.view(x.shape[0], -1)

class SoftmaxLayer(torch.nn.Module):
    # plain softmax over the class dimension
    def __init__(self):
        super(SoftmaxLayer, self).__init__()

    def forward(self, X):
        X_exp = X.exp()
        partition = X_exp.sum(dim=1, keepdim=True)
        return X_exp / partition

net = torch.nn.Sequential(
    FlattenLayer(),
    torch.nn.Linear(num_inputs, num_hiddens),
    torch.nn.ReLU(),
    torch.nn.Linear(num_hiddens, num_outputs),
    SoftmaxLayer(),  # note: nn.CrossEntropyLoss below already applies log-softmax internally,
                     # so this explicit softmax layer is redundant and could be dropped
)

Initialize the model parameters


for params in net.parameters():
    torch.nn.init.normal_(params, mean=0, std=0.01)

Loss function and optimizer


num_epochs = 10
lr = 0.1
loss = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr)

Evaluation function


def evaluate(data_iter, net):
    # average accuracy and loss over a data iterator; no gradients are needed here
    # (for models containing Dropout, net.eval() should also be called first)
    right_sum, n, loss_sum = 0.0, 0, 0.0
    with torch.no_grad():
        for x, y in data_iter:
            y_ = net(x)
            l = loss(y_, y)
            right_sum += (y_.argmax(dim=1) == y).float().sum().item()
            n += y.shape[0]
            loss_sum += l.item()
    return right_sum / n, loss_sum / n

Model training and evaluation

train_l_ = []
test_l_ = []
train_acc_ = []
test_acc_ = []

def train(net, loss, num_epochs, optimizer, train_iter, test_iter):
    for epoch in range(num_epochs):
        train_r_num, train_l, n = 0.0, 0.0, 0
        for X, y in tqdm(train_iter):
            y_hat = net(X)
            l = loss(y_hat, y)
            l.backward()
            optimizer.step()
            optimizer.zero_grad()
            train_r_num += (y_hat.argmax(dim=1) == y).sum().item()
            train_l += l.item()
            n += y.shape[0]
        test_acc, test_l = evaluate(test_iter, net)
        train_l_.append(train_l / n)
        train_acc_.append(train_r_num / n)
        test_l_.append(test_l)
        test_acc_.append(test_acc)
        print('epoch %d, train loss %.4f, train acc %.3f' % (epoch + 1, train_l / n, train_r_num / n))
        print('test loss %.4f, test acc %.3f' % (test_l, test_acc))

train(net, loss, num_epochs, optimizer, train_iter, test_iter)

Plot the loss and accuracy curves


def draw_(x, train_Y, test_Y, ylabel):
    plt.plot(x, train_Y, label='train_' + ylabel, linewidth=1.5)
    plt.plot(x, test_Y, label='test_' + ylabel, linewidth=1.5)
    plt.xlabel('epoch')
    plt.ylabel(ylabel)
    plt.legend()
    plt.show()

x = np.arange(1, len(train_l_) + 1)  # one point per epoch
draw_(x, train_l_, test_l_, 'loss')
draw_(x, train_acc_, test_acc_, 'accuracy')

2. Comparing three different activation functions
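Switching activation functions only swaps out one module in the Sequential. A small helper like the following (my own wrapper around the code from Section 1, not from the original post) makes the comparison easy:

def build_net(activation):
    # activation is a module instance, e.g. torch.nn.ReLU(), torch.nn.Softplus(), torch.nn.Tanh()
    net = torch.nn.Sequential(
        FlattenLayer(),
        torch.nn.Linear(num_inputs, num_hiddens),
        activation,
        torch.nn.Linear(num_hiddens, num_outputs),
        SoftmaxLayer(),
    )
    for params in net.parameters():
        torch.nn.init.normal_(params, mean=0, std=0.01)
    return net

for act in [torch.nn.ReLU(), torch.nn.Softplus(), torch.nn.Tanh()]:
    net = build_net(act)
    optimizer = torch.optim.SGD(net.parameters(), lr)
    # the global metric lists train_l_, test_l_, train_acc_, test_acc_ should be cleared before each run
    train(net, loss, num_epochs, optimizer, train_iter, test_iter)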

2.1 ReLU activation function

Training results (figure)

Loss and accuracy curves (figure)

2.2 Softplus activation function

Training results (figure)

Loss and accuracy curves (figure)

2.3 Tanh activation function

Training results (figure)

Loss and accuracy curves (figure)

3. Comparing different numbers of hidden layers and hidden units

3.1 Number of hidden units

The number of hidden units is adjusted by changing num_hiddens. The experiments below use a single hidden layer with lr=0.2 and 5 epochs; the results are as follows:

Results table (figure)
Ideally each configuration would be run several times and averaged; due to time constraints, every setting in this post was run only once. From the table, on this task more hidden units give better results for the same number of epochs.

3.2 Number of hidden layers

Each hidden layer has 64 units; the results are as follows:

Results table (figure)
More hidden layers make the model more complex; convergence presumably slows down, so for the same number of epochs the results get worse.
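For reference, one way to parameterize both the depth and the width (a sketch; build_mlp and its argument names are my own, reusing the layers defined in Section 1):

def build_mlp(num_hidden_layers, hidden_units):
    layers = [FlattenLayer()]
    in_features = num_inputs
    for _ in range(num_hidden_layers):
        layers.append(torch.nn.Linear(in_features, hidden_units))
        layers.append(torch.nn.ReLU())
        in_features = hidden_units
    layers.append(torch.nn.Linear(in_features, num_outputs))
    layers.append(SoftmaxLayer())
    net = torch.nn.Sequential(*layers)
    for params in net.parameters():
        torch.nn.init.normal_(params, mean=0, std=0.01)
    return net

net = build_mlp(2, 64)  # e.g. two hidden layers with 64 units each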

4. Implementing dropout with torch.nn

Manual implementation
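The hand-written version is not listed in this post; a minimal sketch of inverted dropout, written as a module (MyDropout is my own name) so it can stand in for torch.nn.Dropout in the network below, would look like this:

class MyDropout(torch.nn.Module):
    def __init__(self, drop_prob):
        super(MyDropout, self).__init__()
        assert 0 <= drop_prob < 1
        self.drop_prob = drop_prob

    def forward(self, X):
        # dropout is only active in training mode (net.train())
        if not self.training or self.drop_prob == 0:
            return X
        keep_prob = 1 - self.drop_prob
        # zero each unit with probability drop_prob, then rescale the survivors
        # so the expected activation is unchanged (inverted dropout)
        mask = (torch.rand(X.shape) < keep_prob).float()
        return mask * X / keep_prob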

Implementation with torch.nn

drop_prob1 = 0.2

net = torch.nn.Sequential(
    FlattenLayer(),
    torch.nn.Linear(num_inputs, num_hiddens),
    torch.nn.ReLU(),
    torch.nn.Dropout(drop_prob1),
    torch.nn.Linear(num_hiddens, num_outputs),
    SoftmaxLayer(),
)

Experimental results

Results table (figure)
Here dropout makes the results slightly worse. Dropout is a measure against overfitting, and the model in this task is simple and does not overfit, so dropout brings no improvement.

5. Implementing L2 regularization with torch.nn

L2 regularization (also known as weight decay) can be implemented with the weight_decay parameter of the torch.optim optimizers:


optimizer_w = torch.optim.SGD(net.parameters(), lr=lr, weight_decay=1e-2)
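For comparison, the hand-written version adds the squared L2 norm of the weights to the loss before backpropagating. A minimal sketch of one training pass (lambd is my name for the penalty weight; only the weight matrices are penalized here):

lambd = 1e-2

for X, y in train_iter:
    y_hat = net(X)
    l = loss(y_hat, y)
    # add lambd/2 * ||W||^2 for every weight matrix (biases are left unpenalized)
    for name, param in net.named_parameters():
        if 'weight' in name:
            l = l + lambd * (param ** 2).sum() / 2
    l.backward()
    optimizer.step()
    optimizer.zero_grad()

Note that weight_decay in torch.optim.SGD decays every parameter passed to the optimizer, biases included, so the two variants are close but not strictly identical.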

Experimental results

Results table (figure)
The results also get slightly worse, since the model built here is not complex enough to overfit.

6. K-fold cross-validation

K-fold cross-validation: the dataset is partitioned into K mutually exclusive subsets of similar size. Each round, the union of K-1 subsets serves as the training set and the remaining subset as the validation set, and the final result is the average over the K validation runs.
Get the training and validation sets for fold i

def get_kfold_data(k, i, data):

    fold_size = data.targets.shape[0] // k
    valid_data = deepcopy(data)
    train_data = deepcopy(data)
    start_ = i*fold_size
    if i != k-1:
        end_ = (i+1)*fold_size
        valid_data.data = valid_data.data[start_:end_]
        valid_data.targets = valid_data.targets[start_:end_]
        train_data.data = torch.cat((train_data.data[0:start_], train_data.data[end_:]), dim=0)
        train_data.targets = torch.cat((train_data.targets[0:start_], train_data.targets[end_:]), dim=0)
    else:
        valid_data.data, valid_data.targets = valid_data.data[start_:], valid_data.targets[start_:]
        train_data.data, train_data.targets = train_data.data[0:start_], train_data.targets[0:start_]
    return train_data, valid_data

Training


def k_train(net, train_data, valid_data):
    train_iter = Data.DataLoader(
        dataset=train_data,
        batch_size=batch_size,
        shuffle=True,
        num_workers=0,
    )
    valid_iter = Data.DataLoader(
        dataset=valid_data,
        batch_size=batch_size,
        shuffle=False,
        num_workers=0,
    )

    train_acc, train_l = 0.0, 0.0
    valid_acc, valid_l = 0.0, 0.0

    optimizer = torch.optim.SGD(net.parameters(), lr=lr)

    for epoch in range(num_epochs):
        train_r_num, train_l_, n = 0.0, 0.0, 0
        for X, y in train_iter:
            y_hat = net(X)
            l = loss(y_hat, y)
            l.backward()
            optimizer.step()
            optimizer.zero_grad()
            train_r_num += (y_hat.argmax(dim=1) == y).sum().item()
            train_l_ += l.item()
            n += y.shape[0]
        v_acc, v_l = evaluate(valid_iter, net)
        valid_acc += v_acc
        valid_l += v_l
        train_acc += train_r_num / n
        train_l += train_l_ / n
    return train_l/num_epochs, valid_l/num_epochs, train_acc/num_epochs, valid_acc/num_epochs

def kfold_train(k):
    train_loss_sum, valid_loss_sum = 0, 0
    train_acc_sum, valid_acc_sum = 0, 0
    for i in range(k):
        print('Fold', i+1, 'validation')
        train_data, valid_data = get_kfold_data(k, i, mnist_train)
        net_ = torch.nn.Sequential(
            FlattenLayer(),
            torch.nn.Linear(num_inputs, num_hiddens),
            torch.nn.ReLU(),
            torch.nn.Linear(num_hiddens, num_outputs),
            SoftmaxLayer(),
        )
        for params in net_.parameters():
            torch.nn.init.normal_(params, mean=0, std=0.01)

        train_loss, val_loss, train_acc, val_acc = k_train(net_, train_data, valid_data)
        print('train loss %.4f, val loss %.4f, train acc %.3f, val acc %.3f' % (train_loss, val_loss, train_acc, val_acc))

        train_loss_sum += train_loss
        valid_loss_sum += val_loss
        train_acc_sum += train_acc
        valid_acc_sum += val_acc
    print('\nFinal k-fold cross-validation results:')
    print('ave train loss: %.4f, ave train acc: %.3f' % (train_loss_sum/k, train_acc_sum/k))
    print('ave valid loss: %.4f, ave valid acc: %.3f' % (valid_loss_sum/k, valid_acc_sum/k))

kfold_train(10)

Validation results

Fold 1 validation
train loss 0.0069, val loss 0.0069, train acc 0.734, val acc 0.768
Fold 2 validation
train loss 0.0069, val loss 0.0069, train acc 0.727, val acc 0.755
Fold 3 validation
train loss 0.0069, val loss 0.0069, train acc 0.727, val acc 0.763
Fold 4 validation
train loss 0.0069, val loss 0.0069, train acc 0.731, val acc 0.760
Fold 5 validation
train loss 0.0069, val loss 0.0069, train acc 0.729, val acc 0.761
Fold 6 validation
train loss 0.0069, val loss 0.0068, train acc 0.729, val acc 0.774
Fold 7 validation
train loss 0.0069, val loss 0.0069, train acc 0.726, val acc 0.770
Fold 8 validation
train loss 0.0069, val loss 0.0069, train acc 0.730, val acc 0.764
Fold 9 validation
train loss 0.0069, val loss 0.0069, train acc 0.735, val acc 0.768
Fold 10 validation
train loss 0.0071, val loss 0.0071, train acc 0.671, val acc 0.711

Final k-fold cross-validation results:
ave train loss: 0.0069, ave train acc: 0.724
ave valid loss: 0.0069, ave valid acc: 0.759


Original: https://blog.csdn.net/cumina/article/details/119328314
Author: 番茄牛腩煲
Title: [PyTorch] Implementing a Feedforward Neural Network with torch.nn
