positional encoding


Note

  • Adds information about the position of each token in the sequence so the transformer can use word order, because self-attention on its own is permutation-invariant
  • see decoder architecture
  • in the simplest form, positional encoding vectors are added element-wise to the token embeddings before the first layer
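The additive scheme above can be sketched with the sinusoidal encoding from the original transformer paper; the shapes and the example embeddings here are illustrative, not from this note:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Sin/cos positional encoding: each position gets a unique vector."""
    positions = np.arange(seq_len)[:, np.newaxis]          # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[np.newaxis, :]         # (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model) # (seq_len, d_model/2)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)  # even dims: sine
    pe[:, 1::2] = np.cos(angles)  # odd dims: cosine
    return pe

# positional encodings are simply added to the token embeddings
embeddings = np.random.randn(10, 16)  # hypothetical (seq_len=10, d_model=16) batchless example
x = embeddings + sinusoidal_positional_encoding(10, 16)
```

Because the encoding is a fixed function of position, it needs no training and extends to any sequence length.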

Rotary Position Embeddings (RoPE)
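Instead of adding position vectors, RoPE rotates pairs of query/key dimensions by a position-dependent angle, so attention scores depend only on relative position. A minimal sketch, assuming the half-split pairing convention (implementations also use interleaved pairing):

```python
import numpy as np

def rope(x: np.ndarray, base: float = 10000.0) -> np.ndarray:
    """Apply rotary position embeddings to x of shape (seq_len, d), d even."""
    seq_len, d = x.shape
    assert d % 2 == 0
    half = d // 2
    freqs = base ** (-np.arange(half) / half)        # per-pair rotation frequencies
    angles = np.outer(np.arange(seq_len), freqs)     # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    # 2D rotation applied independently to each (x1_i, x2_i) pair
    return np.concatenate([x1 * cos - x2 * sin,
                           x1 * sin + x2 * cos], axis=-1)
```

Since each pair is rotated, not scaled, vector norms are preserved; only the angle between a query at position m and a key at position n changes, and it changes as a function of m − n.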

Resources


```dataview
table file.inlinks, filter(file.outlinks, (x) => !contains(string(x), ".jpg") AND !contains(string(x), ".pdf") AND !contains(string(x), ".png")) as "Outlinks" from [[]] and !outgoing([[]]) AND -"Changelog"
```