Hands-On: Building a Neural Network for Image Classification with TensorFlow

1. Model overview

In this task we build a fully connected neural network with hidden layers (first one, then two) to classify the MNIST image dataset.

Each MNIST sample is a 28×28×1 image. The data is fed into a basic neural network; forward and backward propagation adjust the model parameters, and the final result is mapped through a softmax function, which outputs predicted probabilities for the 10 classes.

The input data must be flattened into a 784-dimensional vector. The input is first mapped through a weight matrix to produce the hidden-layer (shown in blue in the original figure) activations, which are then mapped down to 10 classes; the final result is passed into the softmax function.
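The flattening step can be sketched with numpy (the random values stand in for a real MNIST sample; only the shapes matter here):

```python
import numpy as np

# A stand-in for one MNIST sample: shape 28x28x1 (values are random,
# only the shapes matter for this illustration).
image = np.random.rand(28, 28, 1)

# Flatten into a single 784-dimensional row vector, the shape the
# network's input layer expects.
x = image.reshape(1, 784)
print(x.shape)  # (1, 784)
```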

2. Basic parameter settings

The dataset is downloaded and loaded through TensorFlow's built-in MNIST helper.

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("data/", one_hot=True)
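Since `one_hot=True` is passed, each label comes back as a 10-dimensional one-hot vector rather than a plain digit. A quick numpy illustration (the digit 3 here is just an example):

```python
import numpy as np

# With one_hot=True, a label such as the digit 3 is returned as a
# 10-dimensional vector with a 1 in position 3 and 0 elsewhere.
label = 3
one_hot = np.eye(10)[label]
print(one_hot)  # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
```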

Set the initial model parameters: 10 classes, input dimension 784, 50 hidden units, 10,000 training iterations, and a batch size of 100.

numClasses = 10
inputSize = 784
numHiddenUnits = 50
trainingIterations = 10000
batchSize = 100

With the basic parameters in place, use the tf.placeholder function to create the input x and its corresponding label y.

The first dimension of both x and y is set to None, so it is determined at run time by the batch size that is fed in.

X = tf.placeholder(tf.float32, shape = [None, inputSize])
y = tf.placeholder(tf.float32, shape = [None, numClasses])

With x and y defined, set up the hidden-layer parameters. W and b can simply be initialized with small random or constant values; the dimension of b must match the layer's output size (the number of units the weights map to).

W1 = tf.Variable(tf.truncated_normal([inputSize, numHiddenUnits], stddev=0.1))
B1 = tf.Variable(tf.constant(0.1, shape=[numHiddenUnits]))  # shape goes inside tf.constant
W2 = tf.Variable(tf.truncated_normal([numHiddenUnits, numClasses], stddev=0.1))
B2 = tf.Variable(tf.constant(0.1, shape=[numClasses]))
3. Basic model construction

3.1 Neural network model

After the basic parameters are set, the network architecture itself can be built.

The input first goes through the first weight layer, and the result is passed through a ReLU activation, giving the hidden-layer output.

That hidden-layer output then goes through the second weight layer, again followed by a ReLU, to produce the final 10-class scores.

hiddenLayerOutput = tf.matmul(X, W1) + B1
hiddenLayerOutput = tf.nn.relu(hiddenLayerOutput)
finalOutput = tf.matmul(hiddenLayerOutput, W2) + B2
finalOutput = tf.nn.relu(finalOutput)

3.2 Optimizer and loss function

Next, build the loss function. In TensorFlow, the classification error is computed with the cross-entropy function in the tf.nn module.

Here labels holds the ground-truth values and logits the model's predictions.

The loss is handed to a gradient descent optimizer with a learning rate of 0.1, which minimizes it.

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels = y, logits = finalOutput))
opt = tf.train.GradientDescentOptimizer(learning_rate = .1).minimize(loss)
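To make the roles of `labels` and `logits` concrete, here is a minimal numpy sketch of what the softmax cross-entropy computes (the toy 3-class values are assumptions for illustration, not MNIST data):

```python
import numpy as np

def softmax_cross_entropy(labels, logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # Cross-entropy against the one-hot labels, averaged over the batch.
    return -(labels * np.log(probs)).sum(axis=1).mean()

labels = np.array([[0., 1., 0.]])       # one-hot ground truth: class 1
logits = np.array([[0.1, 3.0, -1.0]])   # raw scores from the network
print(softmax_cross_entropy(labels, logits))  # small loss: class 1 dominates
```

The loss is small here because the largest logit already sits on the true class; a wrong confident prediction would produce a much larger value.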

Comparing the model's predictions with the true labels gives the current accuracy of the model.

correct_prediction = tf.equal(tf.argmax(finalOutput,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
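The same argmax-and-compare accuracy computation can be sketched in numpy (toy batch values assumed for illustration):

```python
import numpy as np

# Predicted scores and one-hot labels for a toy batch of 4 samples.
finalOutput = np.array([[2.0, 0.1, 0.3],
                        [0.2, 1.5, 0.1],
                        [0.4, 0.3, 0.2],
                        [0.1, 0.2, 2.2]])
y = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],   # wrong: the model's top score is class 0
              [0, 0, 1]])

# Per-sample: does the predicted class match the true class?
correct = np.argmax(finalOutput, axis=1) == np.argmax(y, axis=1)
# Cast booleans to floats and average, exactly as tf.reduce_mean(tf.cast(...)).
accuracy = correct.astype(float).mean()
print(accuracy)  # 0.75
```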

3.3 Iterative optimization

With the parameters set and the model built, choose the batch size for each step and run the training iterations, printing the training accuracy every 1000 steps.

sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)

for i in range(trainingIterations):
    batch = mnist.train.next_batch(batchSize)
    batchInput = batch[0]
    batchLabels = batch[1]
    _, trainingLoss = sess.run([opt, loss], feed_dict={X: batchInput, y: batchLabels})
    if i%1000 == 0:
        trainAccuracy = accuracy.eval(session=sess, feed_dict={X: batchInput, y: batchLabels})
        print ("step %d, training accuracy %g"%(i, trainAccuracy))

step 0, training accuracy 0.13
step 1000, training accuracy 0.79
step 2000, training accuracy 0.83
step 3000, training accuracy 0.88
step 4000, training accuracy 0.91
step 5000, training accuracy 0.87
step 6000, training accuracy 0.89
step 7000, training accuracy 0.84
step 8000, training accuracy 0.89
step 9000, training accuracy 1

4. A neural network with two hidden layers

Similarly, we can add another hidden layer to the network above to extract further features from the data.

The overall code structure stays the same; we simply add one more weight matrix w and bias b.
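With the extra layer, the weight shapes must chain together: each layer's output width is the next layer's input width. A quick sanity check of the shapes and total parameter count for the sizes used in this article:

```python
# Sizes used in the two-hidden-layer model of this article.
inputSize, numHiddenUnits, numHiddenUnitsLayer2, numClasses = 784, 50, 100, 10

# (rows, cols) of W1, W2, W3: input -> hidden1 -> hidden2 -> output.
shapes = [(inputSize, numHiddenUnits),
          (numHiddenUnits, numHiddenUnitsLayer2),
          (numHiddenUnitsLayer2, numClasses)]

# Each layer's output width must equal the next layer's input width.
for (_, out_w), (in_w, _) in zip(shapes, shapes[1:]):
    assert out_w == in_w

# Total trainable parameters: each W contributes rows*cols, each bias cols.
n_params = sum(rows * cols + cols for rows, cols in shapes)
print(n_params)  # 45360
```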

In general, the more layers a neural network has, the more information it can extract from the data's features; for a plain fully connected network like this one, though, keeping it to fewer than about 8 layers usually works better.

The complete code is as follows:

numHiddenUnitsLayer2 = 100
trainingIterations = 10000

X = tf.placeholder(tf.float32, shape = [None, inputSize])
y = tf.placeholder(tf.float32, shape = [None, numClasses])

W1 = tf.Variable(tf.random_normal([inputSize, numHiddenUnits], stddev=0.1))
B1 = tf.Variable(tf.constant(0.1, shape=[numHiddenUnits]))  # shape goes inside tf.constant
W2 = tf.Variable(tf.random_normal([numHiddenUnits, numHiddenUnitsLayer2], stddev=0.1))
B2 = tf.Variable(tf.constant(0.1, shape=[numHiddenUnitsLayer2]))
W3 = tf.Variable(tf.random_normal([numHiddenUnitsLayer2, numClasses], stddev=0.1))
B3 = tf.Variable(tf.constant(0.1, shape=[numClasses]))

hiddenLayerOutput = tf.matmul(X, W1) + B1
hiddenLayerOutput = tf.nn.relu(hiddenLayerOutput)
hiddenLayer2Output = tf.matmul(hiddenLayerOutput, W2) + B2
hiddenLayer2Output = tf.nn.relu(hiddenLayer2Output)
finalOutput = tf.matmul(hiddenLayer2Output, W3) + B3

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels = y, logits = finalOutput))
opt = tf.train.GradientDescentOptimizer(learning_rate = .1).minimize(loss)

correct_prediction = tf.equal(tf.argmax(finalOutput,1), tf.argmax(y,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)

for i in range(trainingIterations):
    batch = mnist.train.next_batch(batchSize)
    batchInput = batch[0]
    batchLabels = batch[1]
    _, trainingLoss = sess.run([opt, loss], feed_dict={X: batchInput, y: batchLabels})
    if i%1000 == 0:
        train_accuracy = accuracy.eval(session=sess, feed_dict={X: batchInput, y: batchLabels})
        print ("step %d, training accuracy %g"%(i, train_accuracy))

testInputs = mnist.test.images
testLabels = mnist.test.labels
acc = accuracy.eval(session=sess, feed_dict = {X: testInputs, y: testLabels})
print("testing accuracy: {}".format(acc))

step 0, training accuracy 0.1
step 1000, training accuracy 0.97
step 2000, training accuracy 0.98
step 3000, training accuracy 1
step 4000, training accuracy 0.99
step 5000, training accuracy 1
step 6000, training accuracy 0.99
step 7000, training accuracy 1
step 8000, training accuracy 0.99
step 9000, training accuracy 1
testing accuracy: 0.9700999855995178

Original: https://blog.csdn.net/qq_44951759/article/details/124387350
Author: 驭风少年君
Title: 《实战》tensorflow搭建神经网络完成图像分类任务

