[Notes][Transformer] Attention Is All You Need

Reading notes on the original Transformer paper, Attention Is All You Need.
Best read alongside the original paper.

Reference:
Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[J]. Advances in neural information processing systems, 2017, 30.

圈圈 2022/3/12 first draft: introduces the model together with code
Main text below ↓

Attention Is All You Need

Transformer

model architecture

$(x_1, \dots, x_n) \xrightarrow{\text{encoder}} (z_1, \dots, z_n) \xrightarrow{\text{decoder}} (y_1, \dots, y_m)$

encoder and decoder stacks

encoder

composed of a stack of $N = 6$ identical layers; each layer has two sub-layers:

  • multi-head self-attention mechanism $x \to S(x)$, with a residual connection followed by layer normalization:
    $x \to \mathrm{LayerNorm}(x + S(x))$
  • simple position-wise fully connected feed-forward network, with the same
    residual connection followed by layer normalization

To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension $d_{model} = 512$.


import torch.nn as nn

class EncoderLayer(nn.Module):
    ''' Compose with two layers '''

    def __init__(self, d_model, d_inner, n_head, d_k, d_v, dropout=0.1):
        super(EncoderLayer, self).__init__()
        # sub-layer 1: multi-head self-attention
        self.slf_attn = MultiHeadAttention(n_head, d_model, d_k, d_v, dropout=dropout)
        # sub-layer 2: position-wise feed-forward network
        self.pos_ffn = PositionwiseFeedForward(d_model, d_inner, dropout=dropout)

    def forward(self, enc_input, slf_attn_mask=None):
        # self-attention: queries, keys and values all come from enc_input
        enc_output, enc_slf_attn = self.slf_attn(
            enc_input, enc_input, enc_input, mask=slf_attn_mask)
        enc_output = self.pos_ffn(enc_output)
        return enc_output, enc_slf_attn
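
The full encoder just stacks $N = 6$ of these layers. A minimal sketch of such a stack, reusing the EncoderLayer above (the class name Encoder and the hyperparameter defaults, taken from the paper's base model, are my assumptions rather than code quoted from the source):

import torch.nn as nn

class Encoder(nn.Module):
    ''' Stack of N identical encoder layers (sketch). '''

    def __init__(self, n_layers=6, d_model=512, d_inner=2048,
                 n_head=8, d_k=64, d_v=64, dropout=0.1):
        super().__init__()
        self.layer_stack = nn.ModuleList([
            EncoderLayer(d_model, d_inner, n_head, d_k, d_v, dropout=dropout)
            for _ in range(n_layers)])

    def forward(self, enc_input, slf_attn_mask=None):
        # enc_input: (batch, seq_len, d_model), already embedded and positionally encoded
        enc_output = enc_input
        for enc_layer in self.layer_stack:
            enc_output, _ = enc_layer(enc_output, slf_attn_mask=slf_attn_mask)
        return enc_output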

decoder

also composed of a stack of $N = 6$ identical layers, but each layer has three sub-layers

the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack.

class DecoderLayer(nn.Module):
    ''' Compose with three layers '''

    def __init__(self, d_model, d_inner, n_head, d_k, d_v, dropout=0.1):
        super(DecoderLayer, self).__init__()
        # sub-layer 1: (masked) multi-head self-attention over the decoder input
        self.slf_attn = MultiHeadAttention(n_head, d_model, d_k, d_v, dropout=dropout)
        # sub-layer 2: multi-head attention over the encoder output
        self.enc_attn = MultiHeadAttention(n_head, d_model, d_k, d_v, dropout=dropout)
        # sub-layer 3: position-wise feed-forward network
        self.pos_ffn = PositionwiseFeedForward(d_model, d_inner, dropout=dropout)

    def forward(
            self, dec_input, enc_output,
            slf_attn_mask=None, dec_enc_attn_mask=None):
        dec_output, dec_slf_attn = self.slf_attn(
            dec_input, dec_input, dec_input, mask=slf_attn_mask)
        # queries come from the decoder, keys and values from the encoder output
        dec_output, dec_enc_attn = self.enc_attn(
            dec_output, enc_output, enc_output, mask=dec_enc_attn_mask)
        dec_output = self.pos_ffn(dec_output)
        return dec_output, dec_slf_attn, dec_enc_attn
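
The decoder's self-attention has to be masked so that position $i$ cannot attend to later positions; slf_attn_mask is therefore usually the padding mask combined with a causal (lower-triangular) mask. A minimal sketch of such a subsequent-position mask (the function name is an assumption):

import torch

def get_subsequent_mask(seq):
    ''' Causal mask: position i may attend only to positions <= i. '''
    batch_size, seq_len = seq.size()
    # (1, seq_len, seq_len) lower-triangular matrix, broadcast over the batch;
    # True means "may attend", False means "blocked"
    return torch.tril(torch.ones((1, seq_len, seq_len), device=seq.device)).bool()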

attention

An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.

scaled dot-product attention

  • Q: packed queries, of dimension $d_k$
  • K: packed keys, of dimension $d_k$
  • V: packed values, of dimension $d_v$

$Attention(Q, K, V) = \mathrm{softmax}\left(\dfrac{QK^T}{\sqrt{d_k}}\right)V$

Reason for scaling the dot products by $\frac{1}{\sqrt{d_k}}$:

While for small values of $d_k$ the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of $d_k$. We suspect that for large values of $d_k$, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients.
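
A minimal PyTorch sketch of scaled dot-product attention following the formula above; masked positions are filled with a large negative value before the softmax so they receive (near-)zero weight (the class name and the dropout are implementation choices, not mandated by the paper):

import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaledDotProductAttention(nn.Module):
    ''' Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V '''

    def __init__(self, temperature, attn_dropout=0.1):
        super().__init__()
        self.temperature = temperature          # sqrt(d_k)
        self.dropout = nn.Dropout(attn_dropout)

    def forward(self, q, k, v, mask=None):
        # q: (batch, n_head, len_q, d_k), k: (batch, n_head, len_k, d_k), v: (batch, n_head, len_k, d_v)
        attn = torch.matmul(q, k.transpose(-2, -1)) / self.temperature
        if mask is not None:
            attn = attn.masked_fill(mask == 0, -1e9)   # block attention to masked positions
        attn = self.dropout(F.softmax(attn, dim=-1))
        output = torch.matmul(attn, v)                 # weighted sum of the values
        return output, attn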


multi-head attention

several attention layers in parallel + projection
$MultiHead(Q, K, V) = \mathrm{Concat}(head_1, \dots, head_h)W^O$
$\text{where } head_i = Attention(QW^Q_i, KW^K_i, VW^V_i)$
projection matrices: $W^Q_i \in \mathbb{R}^{d_{model} \times d_k}$, $W^K_i \in \mathbb{R}^{d_{model} \times d_k}$, $W^V_i \in \mathbb{R}^{d_{model} \times d_v}$, $W^O \in \mathbb{R}^{hd_v \times d_{model}}$
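
A sketch of how the MultiHeadAttention used in the layer code above can be built on top of the scaled dot-product attention: project Q, K, V for all heads at once, attend in parallel, concatenate, and project back to $d_{model}$. Placing the residual connection and layer normalization inside this module is one common choice (which would explain why the EncoderLayer/DecoderLayer excerpts above do not apply them explicitly); the details below are my reconstruction, not quoted code:

import torch.nn as nn

class MultiHeadAttention(nn.Module):
    ''' h scaled dot-product attentions in parallel + output projection W^O. '''

    def __init__(self, n_head, d_model, d_k, d_v, dropout=0.1):
        super().__init__()
        self.n_head, self.d_k, self.d_v = n_head, d_k, d_v
        self.w_qs = nn.Linear(d_model, n_head * d_k, bias=False)   # W^Q for all heads
        self.w_ks = nn.Linear(d_model, n_head * d_k, bias=False)   # W^K for all heads
        self.w_vs = nn.Linear(d_model, n_head * d_v, bias=False)   # W^V for all heads
        self.fc = nn.Linear(n_head * d_v, d_model, bias=False)     # W^O
        self.attention = ScaledDotProductAttention(temperature=d_k ** 0.5)
        self.dropout = nn.Dropout(dropout)
        self.layer_norm = nn.LayerNorm(d_model, eps=1e-6)

    def forward(self, q, k, v, mask=None):
        # q, k, v: (batch, seq_len, d_model)
        sz_b, len_q, len_k = q.size(0), q.size(1), k.size(1)
        residual = q
        # project and split into heads: (batch, n_head, seq_len, d_k or d_v)
        q = self.w_qs(q).view(sz_b, len_q, self.n_head, self.d_k).transpose(1, 2)
        k = self.w_ks(k).view(sz_b, len_k, self.n_head, self.d_k).transpose(1, 2)
        v = self.w_vs(v).view(sz_b, len_k, self.n_head, self.d_v).transpose(1, 2)
        if mask is not None:
            mask = mask.unsqueeze(1)   # broadcast the same mask over all heads
        out, attn = self.attention(q, k, v, mask=mask)
        # concatenate heads: (batch, len_q, n_head * d_v), then project back to d_model
        out = out.transpose(1, 2).contiguous().view(sz_b, len_q, -1)
        out = self.dropout(self.fc(out))
        out = self.layer_norm(out + residual)   # residual connection + layer normalization
        return out, attn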

position-wise feed-forward networks

$FFN(x) = \mathrm{ReLU}(xW_1 + b_1)W_2 + b_2$

two fully connected layers:

self.w_1 = nn.Linear(d_in, d_hid)   # d_model -> d_inner
self.w_2 = nn.Linear(d_hid, d_in)   # d_inner -> d_model

x = self.w_2(F.relu(self.w_1(x)))
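
Wrapping these two linear layers with the residual connection and layer normalization described earlier gives a module along the following lines (a sketch; the dropout placement and the eps value are assumptions):

import torch.nn as nn
import torch.nn.functional as F

class PositionwiseFeedForward(nn.Module):
    ''' FFN(x) = ReLU(x W1 + b1) W2 + b2, applied to each position independently. '''

    def __init__(self, d_in, d_hid, dropout=0.1):
        super().__init__()
        self.w_1 = nn.Linear(d_in, d_hid)    # d_model -> d_ff (2048 in the base model)
        self.w_2 = nn.Linear(d_hid, d_in)    # d_ff -> d_model
        self.layer_norm = nn.LayerNorm(d_in, eps=1e-6)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x):
        residual = x
        x = self.w_2(F.relu(self.w_1(x)))
        x = self.dropout(x)
        x = self.layer_norm(x + residual)    # residual connection + layer normalization
        return x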

embeddings and softmax

share the same weight matrix between the two embedding layers and the pre-softmax linear transformation

self.trg_word_prj = nn.Linear(d_model, n_trg_vocab, bias=False)   # pre-softmax linear transformation

seq_logit = self.trg_word_prj(dec_output)
if self.scale_prj:
    seq_logit *= self.d_model ** -0.5   # optionally scale the logits by 1/sqrt(d_model)

softmax $\to$ probability
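
The weight sharing itself can be done by pointing the projection and the embedding at the same parameter tensor, e.g. (a sketch; the attribute names trg_word_prj and trg_word_emb are assumptions about the surrounding model class):

# both weights have shape (n_trg_vocab, d_model), so they can simply be tied
self.trg_word_prj.weight = self.decoder.trg_word_emb.weight
self.scale_prj = True   # then scale the logits as in the snippet above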

positional encoding

Since our model contains no recurrence and no convolution, in order for the model to make use of the order of the sequence, we must inject some information about the relative or absolute position of the tokens in the sequence.

$PE_{(pos, 2i)} = \sin(pos / 10000^{2i/d_{model}})$
$PE_{(pos, 2i+1)} = \cos(pos / 10000^{2i/d_{model}})$

This is the sinusoidal version; other choices (e.g. learned positional embeddings) are of course possible.

sinusoid_table[:, 0::2] = np.sin(sinusoid_table[:, 0::2])   # even dimensions: sin
sinusoid_table[:, 1::2] = np.cos(sinusoid_table[:, 1::2])   # odd dimensions: cos
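
For reference, these two lines come from building the full sinusoidal table; a minimal sketch of that construction (the function name follows common PyTorch implementations, but the exact code is my reconstruction):

import numpy as np
import torch

def get_sinusoid_encoding_table(n_position, d_model):
    ''' PE(pos, 2i) = sin(pos / 10000^(2i/d_model)), PE(pos, 2i+1) = cos(...) '''
    positions = np.arange(n_position, dtype=float)[:, None]           # (n_position, 1)
    div = np.power(10000.0, 2 * (np.arange(d_model) // 2) / d_model)  # 10000^(2i/d_model)
    sinusoid_table = positions / div                                  # (n_position, d_model)
    sinusoid_table[:, 0::2] = np.sin(sinusoid_table[:, 0::2])         # even dimensions: sin
    sinusoid_table[:, 1::2] = np.cos(sinusoid_table[:, 1::2])         # odd dimensions: cos
    return torch.FloatTensor(sinusoid_table).unsqueeze(0)             # (1, n_position, d_model)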

We chose this function because we hypothesized it would allow the model to easily learn to attend by relative positions

why self-attention?

  • total computational complexity per layer
  • the amount of computation that can be parallelized
  • the path length between long-range dependencies in the network

On these criteria, a self-attention layer connects all positions with a constant number of sequential operations and an $O(1)$ maximum path length, whereas a recurrent layer needs $O(n)$ sequential operations; per-layer complexity is $O(n^2 \cdot d)$ for self-attention versus $O(n \cdot d^2)$ for recurrence. As a side benefit, self-attention could yield more interpretable models.

Original: https://blog.csdn.net/weixin_44834206/article/details/123448597
Author: 有点欠扁的圈圈
Title: [笔记][Transformer]Attention Is All You Need
