Attention Is All You Need
Credibility Rating
Good quality. Reputable source with community review or editorial standards, but less rigorous than peer-reviewed venues.
Rating inherited from publication venue: arXiv
Foundational paper introducing the Transformer architecture, which became the basis for large language models like GPT and Claude. Critical for understanding modern AI systems that are central to AI safety research.
Paper Details
Metadata
Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
Summary
This paper introduces the Transformer, a novel neural network architecture that relies entirely on attention mechanisms, eliminating the need for recurrence and convolutions used in previous sequence-to-sequence models. The Transformer achieves state-of-the-art results on machine translation benchmarks (28.4 BLEU on WMT 2014 English-to-German and 41.8 BLEU on English-to-French) while being significantly more parallelizable and requiring substantially less training time than existing models. The authors demonstrate the architecture's generalizability by successfully applying it to English constituency parsing tasks.
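As a minimal sketch of the attention mechanism the paper is built around, the scaled dot-product attention Attention(Q, K, V) = softmax(QKᵀ/√d_k)V can be written in a few lines of NumPy. This is not the authors' implementation (which used tensor2tensor); the function name and shapes below are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Sketch of Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.

    Q, K: (seq_len, d_k); V: (seq_len, d_v). Masking and batching omitted.
    """
    d_k = Q.shape[-1]
    # Compatibility scores between every query and every key, scaled by sqrt(d_k)
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted sum of the value vectors
    return weights @ V

# Hypothetical usage: 4 positions, d_k = d_v = 8
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)  # shape (4, 8)
```

In the full architecture this operation is applied several times in parallel (multi-head attention) over learned linear projections of the inputs, which is what lets the model attend to information from different representation subspaces without recurrence or convolution.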
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| Large Language Models | Concept | 62.0 |
| Dense Transformers | Concept | 58.0 |
Cached Content Preview
Provided proper attribution is provided, Google hereby grants permission to reproduce the tables and figures in this paper solely for use in journalistic or scholarly works.
Attention Is All You Need
Ashish Vaswani (Google Brain) — avaswani@google.com
Noam Shazeer (Google Brain) — noam@google.com
Niki Parmar (Google Research) — nikip@google.com
Jakob Uszkoreit (Google Research) — usz@google.com
Llion Jones (Google Research) — llion@google.com
Aidan N. Gomez (University of Toronto) — aidan@cs.toronto.edu
Łukasz Kaiser (Google Brain) — lukaszkaiser@google.com
Illia Polosukhin — illia.polosukhin@gmail.com
Equal contribution. Listing order is random. Jakob proposed replacing RNNs with self-attention and started the effort to evaluate this idea.
Ashish, with Illia, designed and implemented the first Transformer models and has been crucially involved in every aspect of this work. Noam proposed scaled dot-product attention, multi-head attention and the parameter-free position representation and became the other person involved in nearly every detail. Niki designed, implemented, tuned and evaluated countless model variants in our original codebase and tensor2tensor. Llion also experimented with novel model variants, was responsible for our initial codebase, and efficient inference and visualizations. Lukasz and Aidan spent countless long days designing various parts of and implementing tensor2tensor, replacing our earlier codebase, greatly improving results and massively accelerating our research.
Work performed while at Google Brain. Work performed while at Google Research.
Abstract
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
1 Introduction
Recurrent neural networks, long short-term memory
... (truncated, 46 KB total)