BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding - ACL Anthology
aclanthology.org/N19-1423
Data Status
Not fetched
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| Deep Learning Revolution Era | Historical | 44.0 |
Cached Content Preview
HTTP 200 · Fetched Feb 22, 2026 · 12 KB
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
Abstract
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).

Anthology ID: N19-1423
Volume: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Month: June
Year: 2019
Address: Minneapolis, Minnesota
Editors: Jill Burstein, Christy Doran, Thamar Solorio
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 4171–4186
URL: https://aclanthology.org/N19-1423/
DOI: 10.18653/v1/N19-1423
Award: Best
... (truncated, 12 KB total)
Resource ID: 80b6364ef8bbf595 | Stable ID: MTZhZDlhZm
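The abstract above describes fine-tuning the pre-trained encoder with just one additional output layer. A minimal sketch of that recipe, assuming the Hugging Face transformers library and an illustrative two-label classification task (the model name, label count, and classifier head here are assumptions for illustration, not code from the paper or this page):

```python
# Sketch: pre-trained BERT encoder + a single task-specific output layer,
# fine-tuned end-to-end, as outlined in the abstract.
import torch
from transformers import BertModel, BertTokenizer

class BertClassifier(torch.nn.Module):
    def __init__(self, num_labels: int = 2):
        super().__init__()
        # Pre-trained bidirectional encoder (weights from BERT pre-training).
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # The "one additional output layer": a linear classifier over the [CLS] representation.
        self.classifier = torch.nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls_repr = outputs.last_hidden_state[:, 0]  # hidden state of the [CLS] token
        return self.classifier(cls_repr)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertClassifier(num_labels=2)

batch = tokenizer(["a sentence to classify"], return_tensors="pt",
                  padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])
# During fine-tuning, all parameters (encoder and classifier) are updated together,
# e.g. with a cross-entropy loss on the task labels.
```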