
Daliy Note

Transformer

Raziel 2021. 8. 20. 12:14

 

Original paper: Attention is All You Need (NIPS 2017)

Reference: illustrated-transformer
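
For reference, the quantity computed by the function below is the scaled dot-product attention from the paper:

\[
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right) V
\]

where \(d_k\) is the depth of the key vectors; dividing by \(\sqrt{d_k}\) keeps the dot products from growing too large before the softmax.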

 

 

import tensorflow as tf


def scaled_dot_product_attention(q, k, v, mask=None):
  """Calculate the attention weights.
  q, k, v must have matching leading dimensions.
  k, v must have matching penultimate dimension, i.e.: seq_len_k = seq_len_v.
  The mask has different shapes depending on its type(padding or look ahead) 
  but it must be broadcastable for addition.
  
  Args:
    q: query shape == (..., seq_len_q, depth)
    k: key shape == (..., seq_len_k, depth)
    v: value shape == (..., seq_len_v, depth_v)
    mask: Float tensor with shape broadcastable 
          to (..., seq_len_q, seq_len_k). Defaults to None.
    
  Returns:
    output, attention_weights
  """

  matmul_qk = tf.matmul(q, k, transpose_b=True)  # (..., seq_len_q, seq_len_k)
  
  # scale matmul_qk
  dk = tf.cast(tf.shape(k)[-1], tf.float32)
  scaled_attention_logits = matmul_qk / tf.math.sqrt(dk)

  # add the mask to the scaled tensor.
  if mask is not None:
    scaled_attention_logits += (mask * -1e9)  

  # softmax is normalized on the last axis (seq_len_k) so that the scores
  # add up to 1.
  attention_weights = tf.nn.softmax(scaled_attention_logits, axis=-1)  # (..., seq_len_q, seq_len_k)

  output = tf.matmul(attention_weights, v)  # (..., seq_len_q, depth_v)

  return output, attention_weights

The three key operations are tf.math.sqrt for scaling the logits, tf.nn.softmax for normalizing them into attention weights, and tf.matmul for combining those weights with the values.
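
A minimal usage sketch of the function above, assuming TensorFlow 2.x. The shapes, values, and the look-ahead-mask line are illustrative assumptions, not part of the original note:

# Toy shapes: batch of 1, 3 query positions, 4 key/value positions, depth 2.
q = tf.random.normal((1, 3, 2))   # (batch, seq_len_q, depth)
k = tf.random.normal((1, 4, 2))   # (batch, seq_len_k, depth)
v = tf.random.normal((1, 4, 2))   # (batch, seq_len_v, depth_v)

# Padding-style mask: 1.0 marks key positions to ignore; broadcastable to
# (..., seq_len_q, seq_len_k). Here the last key position is masked out.
padding_mask = tf.constant([[[0., 0., 0., 1.]]])  # (1, 1, seq_len_k)

output, attention_weights = scaled_dot_product_attention(q, k, v, padding_mask)
print(output.shape)             # (1, 3, 2)
print(attention_weights.shape)  # (1, 3, 4)

# A look-ahead mask (for decoder self-attention) can be built similarly:
# ones above the diagonal block attention to future positions.
look_ahead_mask = 1 - tf.linalg.band_part(tf.ones((3, 3)), -1, 0)

Multiplying the mask by -1e9 pushes the masked logits toward negative infinity, so their softmax weights become effectively zero.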

 

 
