Backpropagation in a General Neural Network (DNN)

The DNN Backpropagation Process

Differentiation of Multivariate Functions

A loss function is a scalar function: it uses a norm to collapse a vector into a scalar. Computing the derivative of the loss with respect to the input of layer L is therefore a scalar-by-vector derivative. In fact, no matter how many dimensions the argument has, it can always be viewed as a flat array of independent variables of a multivariate function.
For example, an $m\times n$ matrix $\{W_{ij}\}$ can be flattened into an array of independent variables of a multivariate function:
$$\{W_{ij}\}\rightarrow(W_{11},W_{12},\dots,W_{mn})$$
A scalar function of $\{W_{ij}\}$ can then be treated as a multivariate function of $(W_{11},W_{12},\dots,W_{mn})$, and the gradient of that multivariate function is exactly the derivative of the scalar function with respect to the matrix. Recall that the gradient of a multivariate function is written as:
$$\frac{\partial f}{\partial \vec{x}}=\left(\frac{\partial f}{\partial x_{1}},\ \frac{\partial f}{\partial x_{2}},\dots,\frac{\partial f}{\partial x_{n}}\right)$$
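To make the "matrix as a flat array of variables" view concrete, here is a minimal NumPy sketch (the name `numerical_gradient`, the test function, and the step size `eps` are illustrative, not from the original post): it estimates $\partial f/\partial W_{ij}$ entry by entry with finite differences and compares against the analytic gradient.

```python
import numpy as np

def numerical_gradient(loss_fn, W, eps=1e-6):
    """Treat the matrix W as a flat list of variables and estimate
    d loss / d W_ij for each entry with a central finite difference."""
    grad = np.zeros_like(W)
    for idx in np.ndindex(W.shape):          # iterate over all (i, j) pairs
        W_plus, W_minus = W.copy(), W.copy()
        W_plus[idx] += eps
        W_minus[idx] -= eps
        grad[idx] = (loss_fn(W_plus) - loss_fn(W_minus)) / (2 * eps)
    return grad

# Example: f(W) = 0.5 * ||W x||^2, whose analytic gradient is (W x) x^T
x = np.random.randn(4)
W = np.random.randn(3, 4)
loss_fn = lambda W_: 0.5 * np.sum((W_ @ x) ** 2)
print(np.allclose(numerical_gradient(loss_fn, W), np.outer(W @ x, x), atol=1e-5))
```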

Vector-by-Vector Differentiation

A vector-valued function can be viewed as a vector whose components are scalar multivariate functions. For example, consider a vector function that maps a vector B to a vector A:
$$A=G(B),\quad A\in \mathbb{R}^{N\times1},\ B\in \mathbb{R}^{M\times1}$$

If we view A as a vector of scalar multivariate functions, the differentiation becomes much more convenient:
$$\begin{aligned} A&=\big(a_{1}(b_{1},b_{2},\dots,b_{m}),\ a_{2}(b_{1},b_{2},\dots,b_{m}),\dots\big)\\ \frac{\partial A}{\partial B}&=\left(\frac{\partial a_{1}}{\partial B},\frac{\partial a_{2}}{\partial B},\dots\right)\\ &=\begin{pmatrix} \frac{\partial a_{1}}{\partial b_{1}} & \dots & \frac{\partial a_{1}}{\partial b_{m}}\\ \frac{\partial a_{2}}{\partial b_{1}} & \dots & \frac{\partial a_{2}}{\partial b_{m}}\\ \vdots & \ddots & \vdots\\ \frac{\partial a_{n}}{\partial b_{1}} & \dots & \frac{\partial a_{n}}{\partial b_{m}} \end{pmatrix} \end{aligned}$$
Wow, see, the vector derivative is much clearer now. Of course, whether you lay the derivative out as an $n\times m$ matrix or an $m\times n$ matrix does not matter, as long as you stay consistent throughout the derivation.
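As a sanity check on this layout, the following sketch (the names `numerical_jacobian` and `G`, and the tanh test map, are illustrative assumptions) builds the $n\times m$ Jacobian above column by column with finite differences and compares it with the analytic Jacobian of a simple map:

```python
import numpy as np

def numerical_jacobian(G, B, eps=1e-6):
    """Stack the gradients of each component a_i(b_1, ..., b_m) row by row,
    giving an n x m Jacobian matrix dA/dB."""
    A = G(B)
    J = np.zeros((A.size, B.size))
    for j in range(B.size):
        B_plus, B_minus = B.copy(), B.copy()
        B_plus[j] += eps
        B_minus[j] -= eps
        J[:, j] = (G(B_plus) - G(B_minus)) / (2 * eps)
    return J

# Example: A = G(B) = tanh(M @ B); the analytic Jacobian is diag(1 - A^2) @ M
M = np.random.randn(5, 3)                # maps R^3 -> R^5
B = np.random.randn(3)
G = lambda B_: np.tanh(M @ B_)
J = numerical_jacobian(G, B)
print(J.shape)                           # (5, 3): one row per output, one column per input
print(np.allclose(J, np.diag(1 - np.tanh(M @ B) ** 2) @ M, atol=1e-5))
```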

Differentiating the DNN Loss Function

The loss function of a neural network is a scalar function. Common choices include the L1 and L2 norm losses, among others. Taking the L2 norm loss as an example, the loss of a typical fully connected network is:
$$\epsilon = \frac{1}{2}\,\big\|\sigma(\mathbf{a}^{L})-\mathbf{y}\big\|^{2}\qquad (Eq.1)$$
where $\mathbf{a}^{L}=\mathbf{W}^{L}\cdot\mathbf{a}^{L-1}+\mathbf{b}^{L}$, with $\mathbf{a}^{L},\mathbf{b}^{L}\in \mathbb{R}^{N_{L}}$ and $\mathbf{W}^{L}\in \mathbb{R}^{N_{L}\times N_{L-1}}$, is the linear (pre-activation) output of layer L, and $\mathbf{y}$ is the ground truth. Now, how do we compute the gradient of the loss with respect to $\mathbf{W}^{L}$ and $\mathbf{b}^{L}$? We only have to expand Eq.1 into the following expression:
$$\begin{aligned} \epsilon &= \frac{1}{2}\sum_{i}^{N}\Big[\sigma\Big(\sum_{j}^{M}W_{ij}^{L}\,a_{j}^{L-1}+b_{i}^{L}\Big)-y_{i}\Big]^{2}\\ \frac{\partial\epsilon}{\partial W_{xy}^{L}} &= \Big[\sigma\Big(\sum_{j}^{M}W_{xj}^{L}\,a_{j}^{L-1}+b_{x}^{L}\Big)-y_{x}\Big]\cdot\sigma'\Big(\sum_{j}^{M}W_{xj}^{L}\,a_{j}^{L-1}+b_{x}^{L}\Big)\cdot a_{y}^{L-1}\\ \text{so,}\quad \frac{\partial\epsilon}{\partial \mathbf{W}^{L}}&=\Big\{\frac{\partial\epsilon}{\partial W_{xy}^{L}}\Big\}_{x:1\rightarrow N,\ y:1\rightarrow M}\\ &\quad\text{Then, surprisingly,}\\ &=\Big[\big(\sigma(\mathbf{W}^{L}\mathbf{a}^{L-1}+\mathbf{b}^{L})-\mathbf{y}\big)\odot\sigma'(\mathbf{W}^{L}\mathbf{a}^{L-1}+\mathbf{b}^{L})\Big]\cdot (\mathbf{a}^{L-1})^{T} \end{aligned}$$
Similarly, differentiating the loss with respect to the bias gives:
$$\frac{\partial\epsilon}{\partial \mathbf{b}^{L}}=\big(\sigma(\mathbf{W}^{L}\mathbf{a}^{L-1}+\mathbf{b}^{L})-\mathbf{y}\big)\odot\sigma'(\mathbf{W}^{L}\mathbf{a}^{L-1}+\mathbf{b}^{L})$$
We usually write $\mathbf{z}^{L}=\mathbf{W}^{L}\mathbf{a}^{L-1}+\mathbf{b}^{L}$ for the pre-activation output and $\boldsymbol{\delta}^{L}=\big(\sigma(\mathbf{z}^{L})-\mathbf{y}\big)\odot\sigma'(\mathbf{z}^{L})$ for the Hadamard-product term above. The gradient of the loss with respect to the last layer's parameters is then:
$$\begin{aligned} \frac{\partial\epsilon}{\partial \mathbf{W}^{L}}&=\boldsymbol{\delta}^{L}\cdot (\mathbf{a}^{L-1})^{T}\\ \frac{\partial\epsilon}{\partial \mathbf{b}^{L}}&=\boldsymbol{\delta}^{L} \end{aligned}$$
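Here is a minimal numerical check of these last-layer formulas, assuming a sigmoid activation for $\sigma$ and the L2 loss of Eq.1 (all sizes and variable names below are illustrative placeholders):

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
sigmoid_prime = lambda z: sigmoid(z) * (1.0 - sigmoid(z))

N, M = 4, 3
W = np.random.randn(N, M)
b = np.random.randn(N)
a_prev = np.random.randn(M)                   # a^{L-1}
y = np.random.randn(N)                        # ground truth

def loss(W_, b_):
    return 0.5 * np.sum((sigmoid(W_ @ a_prev + b_) - y) ** 2)   # Eq.1

# Closed-form gradients of the last layer
z = W @ a_prev + b                            # z^L
delta = (sigmoid(z) - y) * sigmoid_prime(z)   # delta^L = (sigma(z^L) - y) (.) sigma'(z^L)
dW = np.outer(delta, a_prev)                  # delta^L (a^{L-1})^T
db = delta

# Finite-difference check of dW and db
eps = 1e-6
dW_num = np.zeros_like(W)
for idx in np.ndindex(W.shape):
    Wp, Wm = W.copy(), W.copy()
    Wp[idx] += eps; Wm[idx] -= eps
    dW_num[idx] = (loss(Wp, b) - loss(Wm, b)) / (2 * eps)
db_num = np.array([(loss(W, b + eps * e) - loss(W, b - eps * e)) / (2 * eps)
                   for e in np.eye(N)])
print(np.allclose(dW, dW_num, atol=1e-6), np.allclose(db, db_num, atol=1e-6))
```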
Hold on a second, it looks like we have derived something rather remarkable. If we instead differentiate with respect to the parameters of layer H, we get:
$$\begin{aligned} \frac{\partial\epsilon}{\partial \mathbf{W}^{H}}&=\boldsymbol{\delta}^{H}\cdot (\mathbf{a}^{H-1})^{T}\qquad (Eq.2)\\ \frac{\partial\epsilon}{\partial \mathbf{b}^{H}}&=\boldsymbol{\delta}^{H}\qquad\qquad\qquad (Eq.3)\\ \text{where}\quad \boldsymbol{\delta}^{H}&=\frac{\partial\epsilon}{\partial \mathbf{z}^{L}}\cdot\frac{\partial\mathbf{z}^{L}}{\partial \mathbf{z}^{L-1}}\cdots\frac{\partial\mathbf{z}^{H+1}}{\partial \mathbf{z}^{H}} \end{aligned}$$
Clearly, the key is the derivative of one layer's pre-activation output with respect to the previous layer's pre-activation output. Since $\mathbf{a}^{L-1}=\sigma(\mathbf{z}^{L-1})$ and $\mathbf{z}^{L}=\mathbf{W}^{L}\mathbf{a}^{L-1}+\mathbf{b}^{L}$, we have:
$$\begin{aligned} \frac{\partial\mathbf{z}^{L}}{\partial \mathbf{z}^{L-1}}&=\Big\{\frac{\partial z^{L}_{i}}{\partial z^{L-1}_{j}}\Big\}\\ \frac{\partial z^{L}_{i}}{\partial z^{L-1}_{j}}&=W^{L}_{ij}\cdot \sigma'\big(z^{L-1}_{j}\big)\\ \text{which indicates}\quad \frac{\partial\mathbf{z}^{L}}{\partial \mathbf{z}^{L-1}}&=\mathbf{W}^{L}\cdot \mathrm{diag}\big(\sigma'(\mathbf{z}^{L-1})\big)\\ \text{where}\quad \mathrm{diag}\big(\sigma'(\mathbf{z}^{L-1})\big)&=\begin{pmatrix} \sigma'(z_{1}^{L-1}) & 0 & \dots & 0\\ 0 & \sigma'(z_{2}^{L-1}) & \dots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \dots & \sigma'(z_{N_{L-1}}^{L-1}) \end{pmatrix} \end{aligned}$$
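A quick finite-difference check of this Jacobian, again assuming a sigmoid for $\sigma$ (the fixed bias value and layer sizes are arbitrary placeholders):

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
sigmoid_prime = lambda z: sigmoid(z) * (1.0 - sigmoid(z))

W = np.random.randn(4, 3)                  # W^L
z_prev = np.random.randn(3)                # z^{L-1}
z_next = lambda zp: W @ sigmoid(zp) + 1.0  # z^L as a function of z^{L-1}, bias fixed at 1

# Analytic Jacobian: W^L @ diag(sigma'(z^{L-1}))
J_analytic = W @ np.diag(sigmoid_prime(z_prev))

# Finite-difference Jacobian, column by column
eps = 1e-6
J_num = np.column_stack([
    (z_next(z_prev + eps * e) - z_next(z_prev - eps * e)) / (2 * eps)
    for e in np.eye(3)
])
print(np.allclose(J_analytic, J_num, atol=1e-6))
```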

Substituting this into $\boldsymbol{\delta}^{H}$, we obtain:
$$\begin{aligned} \boldsymbol{\delta}^{H} &= \Big(\frac{\partial\mathbf{z}^{L}}{\partial \mathbf{z}^{L-1}}\cdots\frac{\partial\mathbf{z}^{H+1}}{\partial \mathbf{z}^{H}}\Big)^{T}\cdot\boldsymbol{\delta}^{L}\\ &= \Big[\prod_{i=L}^{H+1}\mathbf{W}^{i}\cdot \mathrm{diag}\big(\sigma'(\mathbf{z}^{i-1})\big)\Big]^{T}\cdot\boldsymbol{\delta}^{L}\qquad (Eq.4) \end{aligned}$$
Analyzing it from the dimension aspect, the dimensions in Eq.4 are:
$$\big[(N_{L}\times N_{L-1})(N_{L-1}\times N_{L-2})\cdots(N_{H+1}\times N_{H})\big]^{T}\times(N_{L}\times 1)=(N_{H}\times 1)$$
From this, the gradient expressions for the parameters of any layer follow directly:
$$\begin{aligned} \frac{\partial\epsilon}{\partial \mathbf{W}^{H}}&=\Big[\prod_{i=L}^{H+1}\mathbf{W}^{i}\cdot \mathrm{diag}\big(\sigma'(\mathbf{z}^{i-1})\big)\Big]^{T}\cdot\boldsymbol{\delta}^{L}\cdot (\mathbf{a}^{H-1})^{T}\\ \frac{\partial\epsilon}{\partial \mathbf{b}^{H}}&=\Big[\prod_{i=L}^{H+1}\mathbf{W}^{i}\cdot \mathrm{diag}\big(\sigma'(\mathbf{z}^{i-1})\big)\Big]^{T}\cdot\boldsymbol{\delta}^{L} \end{aligned}$$
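Putting Eq.2–Eq.4 together, here is a compact sketch of the whole backward pass for a small fully connected network. It applies the product in Eq.4 one factor at a time rather than forming the full matrix product, which is equivalent and cheaper; it assumes a sigmoid activation on every layer and checks one weight gradient against a finite difference (layer sizes and names are illustrative):

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
sigmoid_prime = lambda z: sigmoid(z) * (1.0 - sigmoid(z))

sizes = [3, 5, 4, 2]                      # layer widths: input, two hidden, output
Ws = [np.random.randn(n, m) for m, n in zip(sizes[:-1], sizes[1:])]
bs = [np.random.randn(n) for n in sizes[1:]]
x = np.random.randn(sizes[0])
y = np.random.randn(sizes[-1])

def forward(Ws, bs):
    a, zs, acts = x, [], [x]              # acts[k] = a^k, with acts[0] the input x
    for W, b in zip(Ws, bs):
        z = W @ a + b
        a = sigmoid(z)
        zs.append(z); acts.append(a)
    return zs, acts

def backprop(Ws, bs):
    zs, acts = forward(Ws, bs)
    # delta for the last layer L under the L2 loss of Eq.1
    delta = (acts[-1] - y) * sigmoid_prime(zs[-1])
    dWs, dbs = [None] * len(Ws), [None] * len(Ws)
    for h in reversed(range(len(Ws))):           # index h corresponds to layer h+1
        dWs[h] = np.outer(delta, acts[h])        # Eq.2: delta^{h+1} (a^h)^T
        dbs[h] = delta                           # Eq.3
        if h > 0:
            # one factor of Eq.4: delta^h = (W^{h+1} diag(sigma'(z^h)))^T delta^{h+1}
            delta = (Ws[h] @ np.diag(sigmoid_prime(zs[h - 1]))).T @ delta
    return dWs, dbs

def loss(Ws, bs):
    _, acts = forward(Ws, bs)
    return 0.5 * np.sum((acts[-1] - y) ** 2)

# Finite-difference check of one entry of the first layer's weight gradient
dWs, _ = backprop(Ws, bs)
eps, idx = 1e-6, (0, 0)
Wp = [W.copy() for W in Ws]; Wp[0][idx] += eps
Wm = [W.copy() for W in Ws]; Wm[0][idx] -= eps
num = (loss(Wp, bs) - loss(Wm, bs)) / (2 * eps)
print(np.isclose(dWs[0][idx], num, atol=1e-6))
```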

Original: https://blog.csdn.net/qq_40840924/article/details/124454175
Author: 粉粉Shawn
Title: Backpropagation in a General Neural Network (DNN)

