[Deep Learning 8] PyTorch Usage and Small Examples

Table of Contents

8. PyTorch
  8.1 Basic Definitions
    1 Tensors
    2 Variable
  8.2 Activation Functions
  8.3 Loss Functions
    1) Mean Squared Error Loss
    2) Cross-Entropy Loss
  8.4 PyTorch in Practice
    1 MNIST Handwritten Digit Recognition
    2 CIFAR-10 Classification

8. PyTorch

8.1 Basic Definitions

1 Tensors

Key advantage: tensors can be moved to a GPU for accelerated computation.
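For example, here is a minimal sketch of moving a computation onto the GPU (the device check and the toy values are my own additions, not from the original article):

import torch

# Use the GPU if CUDA is available, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

x = torch.ones(2, 4).to(device)  # move the tensor onto the chosen device
y = (x * 2).sum()                # this computation runs on that device
print(y.item())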

1) Converting between Tensor and NumPy

import torch
import numpy as np

np_data = np.arange(8).reshape((2, 4))   # NumPy array of shape (2, 4)
torch_data = torch.from_numpy(np_data)   # NumPy -> Tensor (shares memory)
print(np_data)
print(torch_data)

np_data2 = torch_data.numpy()            # Tensor -> NumPy (also shares memory)
print(np_data2)

2) Matrix operations

import torch
import numpy as np

np_data = np.array([[1, 2], [3, 5]])
torch_data = torch.from_numpy(np_data)
print(np_data)
print(np_data.dot(np_data))       # matrix product in NumPy
print(torch_data.mm(torch_data))  # matrix product in PyTorch
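Note that mm only multiplies 2-D matrices. As a small aside (my own addition, not from the original article), the @ operator and torch.matmul are the more general forms, and * is element-wise rather than a matrix product:

import torch

a = torch.tensor([[1., 2.], [3., 5.]])
print(a @ a)               # same matrix product, infix form
print(torch.matmul(a, a))  # general form, also handles batched inputs
print(a * a)               # element-wise product, NOT a matrix product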

2 Variable

The Tensor is PyTorch's basic building block, but on its own it is not enough to build a neural network; for that we need tensors that can participate in a computation graph, namely Variable (loosely speaking, a Variable is a wrapper around a Tensor).
It supports the same operations as a Tensor, but every Variable carries three attributes:

  • .data: the Tensor wrapped by the Variable
  • .grad: the gradient of the corresponding Tensor
  • .grad_fn: a reference to the Function that created this Variable (the reference can be followed to trace back the whole creation chain; for a Variable created directly by the user, grad_fn is None)

import torch
from torch.autograd import Variable

x_tensor = torch.randn(10, 5)
x = Variable(x_tensor, requires_grad=True)  # track gradients for x

print(x.data)     # the underlying Tensor
print(x.grad)     # None: no backward pass has run yet
print(x.grad_fn)  # None: x was created by the user, not by an operation
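Note that .grad stays None until a backward pass has run, and .grad_fn is only set for Variables produced by operations. A minimal sketch of both (the choice of y = x.sum() as the scalar output is my own, for illustration):

import torch
from torch.autograd import Variable

x = Variable(torch.ones(2, 3), requires_grad=True)
y = x.sum()        # y was created by an operation, so it has a grad_fn
y.backward()       # backpropagate from the scalar y

print(y.grad_fn)   # a SumBackward reference
print(x.grad)      # d(sum)/dx: a 2x3 tensor of ones

Since PyTorch 0.4 the Variable API has been merged into Tensor, so on modern versions a plain tensor created with requires_grad=True behaves the same way.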

8.2 Activation Functions

import torch
from torch.autograd import Variable
import matplotlib.pyplot as plt

tensor = torch.linspace(-6, 6, 200)  # 200 evenly spaced points in [-6, 6]
tensor = Variable(tensor)
np_data = tensor.data.numpy()        # x values for plotting

# Apply each activation and pull the results back as NumPy arrays.
y_relu = torch.relu(tensor).data.numpy()
y_sigmoid = torch.sigmoid(tensor).data.numpy()
y_tanh = torch.tanh(tensor).data.numpy()

plt.figure(1, figsize=(8, 6))
plt.subplot(221)
plt.plot(np_data, y_relu, c='red', label='relu')
plt.legend(loc='best')
plt.subplot(222)
plt.plot(np_data, y_sigmoid, c='red', label='sigmoid')
plt.legend(loc='best')
plt.subplot(223)
plt.plot(np_data, y_tanh, c='red', label='tanh')
plt.legend(loc='best')
plt.show()

8.3 Loss Functions

1) Mean Squared Error Loss

In PyTorch the mean squared error loss is wrapped in the MSELoss class, which is called as follows:

torch.nn.MSELoss(size_average=None, reduce=None, reduction='mean')

The parameters are as follows:

size_average (bool, optional): deprecated (see reduction). By default the loss is averaged over every loss element in the batch. Note that for some losses there are multiple elements per sample. If size_average is set to False, the losses are instead summed for each batch. Ignored when reduce is False. Default: True.

reduce (bool, optional): deprecated (see reduction). By default the losses are averaged or summed over each batch depending on size_average. When reduce is False, the loss for each batch element is returned and size_average is ignored. Default: True.

reduction (string, optional): one of three ways to reduce the output: 'none', 'mean', or 'sum'. 'none': no reduction is applied. 'mean': the sum of the output is divided by the number of elements in the output. 'sum': the output is summed. Note: size_average and reduce are deprecated, and specifying either of them overrides reduction. Default: 'mean'. Since PyTorch 0.4 the size_average and reduce arguments have been deprecated.
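To make the three reduction modes concrete, here is a minimal sketch on toy data (the pred and target values are my own illustration):

import torch
import torch.nn as nn

pred = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
target = torch.tensor([[1.0, 1.0], [1.0, 1.0]])

print(nn.MSELoss(reduction='none')(pred, target))  # per-element squared errors
print(nn.MSELoss(reduction='mean')(pred, target))  # (0 + 1 + 4 + 9) / 4 = 3.5
print(nn.MSELoss(reduction='sum')(pred, target))   # 0 + 1 + 4 + 9 = 14.0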

2) Cross-Entropy Loss

PyTorch's cross-entropy loss combines nn.LogSoftmax() and nn.NLLLoss() in a single class named CrossEntropyLoss(). CrossEntropyLoss is the standard loss for multi-class classification, and it is called as follows:

torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')

The parameters are as follows:

weight (Tensor, optional): a manual rescaling weight for each class in a multi-class task. If given, it must be a tensor whose size equals the number of classes.

size_average (bool, optional): deprecated (see reduction). By default the loss is averaged over every loss element in the batch. Note that for some losses there are multiple elements per sample. If size_average is set to False, the losses are instead summed for each mini-batch. Ignored when reduce is False. Default: True.

ignore_index (int, optional): specifies a target value that is ignored and does not contribute to the input gradient. When size_average is True, the loss is averaged over the non-ignored targets.

reduce (bool, optional): deprecated (see reduction). By default the losses are averaged or summed over each batch depending on size_average. When reduce is False, the loss for each batch element is returned and size_average is ignored. Default: True.

reduction (string, optional): one of three ways to reduce the output:

  • 'none': no reduction is applied.
  • 'mean': the sum of the output is divided by the number of elements in the output.
  • 'sum': the output is summed.

Note: size_average and reduce are deprecated, and specifying either of them overrides reduction. Default: 'mean'.
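Since CrossEntropyLoss combines LogSoftmax and NLLLoss, the two routes should produce identical values. A minimal sketch verifying this on toy data (the logits and labels are my own illustrative values):

import torch
import torch.nn as nn

logits = torch.tensor([[2.0, 0.5, 0.1],
                       [0.2, 1.5, 0.3]])  # raw scores for 3 classes
labels = torch.tensor([0, 1])             # correct class indices

ce = nn.CrossEntropyLoss()(logits, labels)

# Equivalent two-step computation: log-softmax, then negative log-likelihood.
log_probs = nn.LogSoftmax(dim=1)(logits)
nll = nn.NLLLoss()(log_probs, labels)

print(ce.item(), nll.item())  # identical values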

8.4 PyTorch in Practice

1 MNIST Handwritten Digit Recognition

For the first case study we use the MNIST dataset to recognize handwritten digits.

1) Data preparation

import torch
from torch.utils.data import DataLoader
import torchvision.datasets as dsets
import torchvision.transforms as transforms

batch_size = 100

# Download MNIST (60,000 training / 10,000 test images) as tensors.
train_dataset = dsets.MNIST(root='/pymnist',
                            train=True,
                            transform=transforms.ToTensor(),
                            download=True)
test_dataset = dsets.MNIST(root='/pymnist',
                           train=False,
                           transform=transforms.ToTensor(),
                           download=True)

# Wrap the datasets in loaders that yield shuffled mini-batches of 100.
train_loader = DataLoader(dataset=train_dataset,
                          batch_size=batch_size,
                          shuffle=True)
test_loader = DataLoader(dataset=test_dataset,
                         batch_size=batch_size,
                         shuffle=True)
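As a quick sanity check (my own addition), each batch the loader yields should contain 100 grayscale 28x28 images and 100 integer labels:

images, labels = next(iter(train_loader))
print(images.shape)  # torch.Size([100, 1, 28, 28])
print(labels.shape)  # torch.Size([100])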

2) Building the network

import torch
import torch.nn as nn

input_size = 784   # 28 * 28 pixels, flattened
hidden_size = 500
num_classes = 10   # digits 0-9

class Neural_net(nn.Module):

    def __init__(self, input_num, hidden_size, out_put):
        super(Neural_net, self).__init__()
        self.layer1 = nn.Linear(input_num, hidden_size)
        self.layer2 = nn.Linear(hidden_size, out_put)

    def forward(self, x):
        out = self.layer1(x)
        out = torch.relu(out)
        out = self.layer2(out)  # raw logits; CrossEntropyLoss applies softmax
        return out

net = Neural_net(input_size, hidden_size, num_classes)
print(net)

3) Training


from torch.autograd import Variable

learning_rate = 1e-1
num_epoches = 5
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=learning_rate)

for epoch in range(num_epoches):
    print('current epoch = %d' % epoch)
    for i, (images, labels) in enumerate(train_loader):
        images = Variable(images.view(-1, 28 * 28))  # flatten to (batch, 784)
        labels = Variable(labels)

        outputs = net(images)
        loss = criterion(outputs, labels)
        optimizer.zero_grad()  # clear gradients from the previous step
        loss.backward()        # backpropagate
        optimizer.step()       # update the weights

        if i % 100 == 0:
            print('current loss = %.5f' % loss.item())

print('finished training')

4) Evaluation


total = 0
correct = 0
for images, labels in test_loader:
    images = Variable(images.view(-1, 28 * 28))
    outputs = net(images)
    _, predicts = torch.max(outputs.data, 1)  # index of the highest logit
    total += labels.size(0)
    correct += (predicts == labels).sum().item()

print('Accuracy = %.2f' % (100.0 * correct / total))

Full code

import torch
import torch.nn as nn
from torch.autograd import Variable
from torch.utils.data import DataLoader
import torchvision.datasets as dsets
import torchvision.transforms as transforms

batch_size = 100

# 1) Data preparation
train_dataset = dsets.MNIST(root='/pymnist',
                            train=True,
                            transform=transforms.ToTensor(),
                            download=True)
test_dataset = dsets.MNIST(root='/pymnist',
                           train=False,
                           transform=transforms.ToTensor(),
                           download=True)

train_loader = DataLoader(dataset=train_dataset,
                          batch_size=batch_size,
                          shuffle=True)
test_loader = DataLoader(dataset=test_dataset,
                         batch_size=batch_size,
                         shuffle=True)

# 2) Model
input_size = 784
hidden_size = 500
num_classes = 10

class Neural_net(nn.Module):

    def __init__(self, input_num, hidden_size, out_put):
        super(Neural_net, self).__init__()
        self.layer1 = nn.Linear(input_num, hidden_size)
        self.layer2 = nn.Linear(hidden_size, out_put)

    def forward(self, x):
        out = self.layer1(x)
        out = torch.relu(out)
        out = self.layer2(out)
        return out

net = Neural_net(input_size, hidden_size, num_classes)
print(net)

# 3) Training
learning_rate = 1e-1
num_epoches = 5
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=learning_rate)

for epoch in range(num_epoches):
    print('current epoch = %d' % epoch)
    for i, (images, labels) in enumerate(train_loader):
        images = Variable(images.view(-1, 28 * 28))
        labels = Variable(labels)

        outputs = net(images)
        loss = criterion(outputs, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if i % 100 == 0:
            print('current loss = %.5f' % loss.item())

print('finished training')

# 4) Evaluation
total = 0
correct = 0
for images, labels in test_loader:
    images = Variable(images.view(-1, 28 * 28))
    outputs = net(images)
    _, predicts = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (predicts == labels).sum().item()

print('Accuracy = %.2f' % (100.0 * correct / total))

2 CIFAR-10 Classification

1) Data preparation

import torch
from torch.utils.data import DataLoader
import torchvision.datasets as dsets
import torchvision.transforms as transforms

batch_size = 100

# Download CIFAR-10 (50,000 training / 10,000 test color images of 32x32).
train_dataset = dsets.CIFAR10(root='/ml/pycifar',
                              train=True,
                              transform=transforms.ToTensor(),
                              download=True)
test_dataset = dsets.CIFAR10(root='/ml/pycifar',
                             train=False,
                             transform=transforms.ToTensor(),
                             download=True)

train_loader = DataLoader(dataset=train_dataset,
                          batch_size=batch_size,
                          shuffle=True)
test_loader = DataLoader(dataset=test_dataset,
                         batch_size=batch_size,
                         shuffle=True)

2) Building the network

import torch
import torch.nn as nn

input_size = 3072  # CIFAR-10 images are 3 x 32 x 32 = 3072 values, flattened
hidden_size = 100
hidden_size2 = 200
num_classes = 10

class Net(nn.Module):
    def __init__(self, input_size, hidden_size, hidden_size2, num_classes):
        super(Net, self).__init__()
        self.layer1 = nn.Linear(input_size, hidden_size)
        self.layer2 = nn.Linear(hidden_size, hidden_size2)
        self.layer3 = nn.Linear(hidden_size2, num_classes)

    def forward(self, x):
        out = torch.relu(self.layer1(x))
        out = torch.relu(self.layer2(out))
        out = self.layer3(out)  # raw logits
        return out

net = Net(input_size, hidden_size, hidden_size2, num_classes)
print(net)

3) Training


from torch.autograd import Variable

learning_rate = 1e-1
num_epoches = 5
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=learning_rate)

for epoch in range(num_epoches):
    print('current epoch = %d' % epoch)
    for i, (images, labels) in enumerate(train_loader):
        images = Variable(images.view(-1, 3 * 32 * 32))  # flatten to (batch, 3072)
        labels = Variable(labels)
        outputs = net(images)
        loss = criterion(outputs, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if i % 100 == 0:
            print('current loss = %.5f' % loss.item())
print('finished training')

4) Evaluation


total = 0
correct = 0
for images, labels in test_loader:
    images = Variable(images.view(images.size(0), -1))  # flatten each image
    outputs = net(images)
    _, predicts = torch.max(outputs.data, 1)
    total += labels.size(0)
    correct += (predicts == labels).sum().item()

print('Accuracy = %.2f' % (100.0 * correct / total))

The accuracy I obtained on the MNIST dataset was 97.41%.

On the more complex color CIFAR-10 dataset, however, the same approach performs much worse, reaching only 48.35% accuracy. Shallow fully connected networks can solve some simple problems, but harder ones call for convolutional neural networks.

Original: https://blog.csdn.net/NewbieJ_/article/details/124707797
Author: JunLal
Title: [Deep Learning 8] PyTorch Usage and Small Examples
