# Named entity recognition

Named entity recognition (NER) is the task of tagging entities in text with their corresponding type. Approaches typically use BIO notation, which differentiates the beginning (B) and the inside (I) of entities. O is used for non-entity tokens.

Example:

```
Mark  Watney visited Mars
B-PER I-PER  O       B-LOC
```
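Decoding a BIO tag sequence back into typed entity spans is the usual first step before evaluation. A minimal sketch (the function name `bio_to_spans` is illustrative, not from any particular library):

```python
def bio_to_spans(tags):
    """Convert a BIO tag sequence into (type, start, end) spans, end exclusive."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        inside = tag.startswith("I-") and etype == tag[2:]
        if not inside and start is not None:   # close the currently open entity
            spans.append((etype, start, i))
            start, etype = None, None
        if tag.startswith("B-"):               # open a new entity
            start, etype = i, tag[2:]
    if start is not None:                      # entity running to the end
        spans.append((etype, start, len(tags)))
    return spans

print(bio_to_spans(["B-PER", "I-PER", "O", "B-LOC"]))
# [('PER', 0, 2), ('LOC', 3, 4)]
```

Because B- and I- are distinguished, two adjacent entities of the same type (`B-LOC B-LOC`) decode into two spans rather than one.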

## CoNLL 2003 (English)

The CoNLL 2003 NER task consists of newswire text from the Reuters RCV1 corpus tagged with four entity types (PER, LOC, ORG, MISC). Models are evaluated on span-based F1 on the test set. Models marked with ♦ used both the train and development splits for training.
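Span-based F1 counts a predicted entity as correct only when both its type and its exact boundaries match a gold entity. A minimal sketch of micro-averaged span-level scoring (function and variable names are illustrative; the official CoNLL evaluation uses the conlleval script):

```python
def span_f1(gold_spans, pred_spans):
    """Micro-averaged span-level precision/recall/F1 over a corpus.
    Each argument is a list of per-sentence collections of
    (type, start, end) spans."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_spans, pred_spans):
        gold, pred = set(gold), set(pred)
        tp += len(gold & pred)   # exact type + boundary matches
        fp += len(pred - gold)
        fn += len(gold - pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = [[("PER", 0, 2), ("LOC", 3, 4)]]
pred = [[("PER", 0, 2), ("ORG", 3, 4)]]   # second span has the wrong type
print(span_f1(gold, pred))  # (0.5, 0.5, 0.5)
```

Note that a span with correct boundaries but the wrong type counts as both a false positive and a false negative, which is why boundary-only errors are penalised twice under this metric.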

| Model | F1 | Paper / Source | Code |
| --- | :---: | --- | --- |
| ACE + document-context (Wang et al., 2021) | 94.6 | Automated Concatenation of Embeddings for Structured Prediction | Official |
| LUKE (Yamada et al., 2020) | 94.3 | LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention | Official |
| CL-KL (Wang et al., 2021) | 93.85 | Improving Named Entity Recognition by External Context Retrieving and Cooperative Learning | Official |
| XLNet-GCN (Tran et al., 2021) | 93.82 | Improving Named Entity Recognition by External Context Retrieving and Cooperative Learning | Official |
| InferNER (Moemmur et al., 2021) | 93.76 | InferNER: an attentive model leveraging the sentence-level information for Named Entity Recognition in Microblogs | |
| ACE (Wang et al., 2021) | 93.6 | Automated Concatenation of Embeddings for Structured Prediction | Official |
| CNN Large + fine-tune (Baevski et al., 2019) | 93.5 | Cloze-driven Pretraining of Self-attention Networks | |
| RNN-CRF+Flair | 93.47 | Improved Differentiable Architecture Search for Language Modeling and Named Entity Recognition | |
| CrossWeigh + Flair (Wang et al., 2019) ♦ | 93.43 | CrossWeigh: Training Named Entity Tagger from Imperfect Annotations | Official |
| LSTM-CRF+ELMo+BERT+Flair | 93.38 | Neural Architectures for Nested NER through Linearization | Official |
| Flair embeddings (Akbik et al., 2018) ♦ | 93.09 | Contextual String Embeddings for Sequence Labeling | Flair framework |
| BERT Large (Devlin et al., 2018) | 92.8 | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | |
| CVT + Multi-Task (Clark et al., 2018) | 92.61 | Semi-Supervised Sequence Modeling with Cross-View Training | Official |
| BERT Base (Devlin et al., 2018) | 92.4 | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | |
| BiLSTM-CRF+ELMo (Peters et al., 2018) | 92.22 | Deep contextualized word representations | AllenNLP Project / AllenNLP GitHub |
| Peters et al. (2017) ♦ | 91.93 | Semi-supervised sequence tagging with bidirectional language models | |
| CRF + AutoEncoder (Wu et al., 2018) | 91.87 | Evaluating the Utility of Hand-crafted Features in Sequence Labelling | Official |
| Bi-LSTM-CRF + Lexical Features (Ghaddar and Langlais 2018) | 91.73 | Robust Lexical Features for Improved Neural Network Named-Entity Recognition | Official |
| BiLSTM-CRF + IntNet (Xin et al., 2018) | 91.64 | Learning Better Internal Structure of Words for Sequence Labeling | |
| Chiu and Nichols (2016) ♦ | 91.62 | Named entity recognition with bidirectional LSTM-CNNs | |
| HSCRF (Ye and Ling, 2018) | 91.38 | Hybrid semi-Markov CRF for Neural Sequence Labeling | HSCRF |
| IXA pipes (Agerri and Rigau 2016) | 91.36 | Robust multilingual Named Entity Recognition with shallow semi-supervised features | Official |
| NCRF++ (Yang and Zhang, 2018) | 91.35 | NCRF++: An Open-source Neural Sequence Labeling Toolkit | NCRF++ |
| Yang et al. (2017) ♦ | 91.26 | Transfer Learning for Sequence Tagging with Hierarchical Recurrent Networks | |
| LM-LSTM-CRF (Liu et al., 2018) | 91.24 | Empowering Character-aware Sequence Labeling with Task-Aware Neural Language Model | LM-LSTM-CRF |
| Ma and Hovy (2016) | 91.21 | End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF | |
| LSTM-CRF (Lample et al., 2016) | 90.94 | Neural Architectures for Named Entity Recognition | |

## CoNLL++

This is a cleaner version of the CoNLL 2003 NER task, in which about 5% of the test-set instances are corrected due to mislabelling; the training set is left untouched. Models are evaluated on span-based F1 on the test set. Models marked with ♦ used both the train and development splits for training.

Links: CoNLL++ (including direct download links for data)

| Model | F1 | Paper / Source | Code |
| --- | :---: | --- | --- |
| CL-KL (Wang et al., 2021) | 94.81 | Improving Named Entity Recognition by External Context Retrieving and Cooperative Learning | Official |
| CrossWeigh + Flair (Wang et al., 2019) ♦ | 94.28 | CrossWeigh: Training Named Entity Tagger from Imperfect Annotations | Official |
| Flair embeddings (Akbik et al., 2018) ♦ | 93.89 | Contextual String Embeddings for Sequence Labeling | Flair framework |
| BiLSTM-CRF+ELMo (Peters et al., 2018) | 93.42 | Deep contextualized word representations | AllenNLP Project / AllenNLP GitHub |
| Ma and Hovy (2016) | 91.87 | End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF | |
| LSTM-CRF (Lample et al., 2016) | 91.47 | Neural Architectures for Named Entity Recognition | |

## Long-tail emerging entities

The WNUT 2017 Emerging Entities task covers a wide range of English text and focuses on generalisation beyond memorisation in high-variance environments. Scores are given both over entity chunk instances and over unique entity surface forms, to normalise the biasing impact of frequently occurring entities.
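The surface-form score can be sketched by collapsing repeated mentions to unique (surface form, type) pairs before computing F1. This is an illustrative approximation, not the official WNUT scorer; in particular, whether surface forms are normalised (e.g. case-folded) before deduplication is left out here:

```python
def surface_form_f1(gold_mentions, pred_mentions):
    """F1 over unique (surface form, type) pairs rather than mention
    instances, so a frequently occurring entity contributes only once."""
    gold, pred = set(gold_mentions), set(pred_mentions)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# "Mars" appears twice in the gold mentions but counts only once.
gold = [("Mars", "location"), ("Mars", "location"), ("NASA", "group")]
pred = [("Mars", "location")]
print(round(surface_form_f1(gold, pred), 3))  # 0.667
```

Under instance-level scoring the prediction above would recall only one of three mentions; after deduplication it recalls one of two unique forms, which is exactly the frequency bias the surface-form metric removes.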

| Feature | Train | Dev | Test |
| --- | ---: | ---: | ---: |
| Posts | 3,395 | 1,009 | 1,287 |
| Tokens | 62,729 | 15,733 | 23,394 |
| NE tokens | 3,160 | 1,250 | 1,589 |

The data is annotated with six classes: person, location, group, creative work, product, and corporation.

Links: WNUT 2017 Emerging Entity task page (including direct download links for data and scoring script)

| Model | F1 | F1 (surface form) | Paper / Source |
| --- | :---: | :---: | --- |
| InferNER (Moemmur et al., 2021) | 50.52 | — | InferNER: an attentive model leveraging the sentence-level information for Named Entity Recognition in Microblogs |
| CrossWeigh + Flair (Wang et al., 2019) | 50.03 | — | CrossWeigh: Training Named Entity Tagger from Imperfect Annotations (Official) |
| Flair embeddings (Akbik et al., 2018) | 49.59 | — | Pooled Contextualized Embeddings for Named Entity Recognition (Flair framework) |
| Aguilar et al. (2018) | 45.55 | — | Modeling Noisiness to Recognize Named Entities using Multitask Neural Networks on Social Media |
| SpinningBytes | 40.78 | 39.33 | Transfer Learning and Sentence Level Features for Named Entity Recognition on Tweets |

## OntoNotes v5 (English)

The OntoNotes corpus v5 is a richly annotated corpus with several layers of annotation, including named entities, coreference, part of speech, word sense, propositions, and syntactic parse trees. These annotations cover a large number of tokens, a broad cross-section of domains, and three languages (English, Arabic, and Chinese). The NER annotation (of interest here) uses 18 tags, consisting of 11 entity types (PERSON, ORGANIZATION, etc.) and 7 value types (DATE, PERCENT, etc.), and contains 2 million tokens. The data split commonly used for NER is defined in Pradhan et al. (2013).

| Model | F1 | Paper / Source | Code |
| --- | :---: | --- | --- |
| BERT+KVMN (Nie et al., 2020) | 90.32 | Improving Named Entity Recognition with Attentive Ensemble of Syntactic Information | Official |
| Flair embeddings (Akbik et al., 2018) | 89.71 | Contextual String Embeddings for Sequence Labeling | Official |
| CVT + Multi-Task (Clark et al., 2018) | 88.81 | Semi-Supervised Sequence Modeling with Cross-View Training | Official |
| Bi-LSTM-CRF + Lexical Features (Ghaddar and Langlais 2018) | 87.95 | Robust Lexical Features for Improved Neural Network Named-Entity Recognition | Official |
| BiLSTM-CRF (Strubell et al., 2017) | 86.99 | Fast and Accurate Entity Recognition with Iterated Dilated Convolutions | Official |
| Iterated Dilated CNN (Strubell et al., 2017) | 86.84 | Fast and Accurate Entity Recognition with Iterated Dilated Convolutions | Official |
| Chiu and Nichols (2016) | 86.28 | Named entity recognition with bidirectional LSTM-CNNs | |
| Joint Model (Durrett and Klein 2014) | 84.04 | A Joint Model for Entity Analysis: Coreference, Typing, and Linking | |
| Averaged Perceptron (Ratinov and Roth 2009) | 83.45 | Design Challenges and Misconceptions in Named Entity Recognition (scores reported in Durrett and Klein 2014) | Official |

## Few-NERD

Few-NERD is a large-scale, fine-grained, manually annotated named entity recognition dataset, which contains 8 coarse-grained types, 66 fine-grained types, 188,200 sentences, 491,711 entities, and 4,601,223 tokens. Three benchmark tasks are built on it:

- Few-NERD (SUP): a standard supervised NER task;
- Few-NERD (INTRA): a few-shot NER task across different coarse-grained types;
- Few-NERD (INTER): a few-shot NER task within coarse-grained types.

Website: Few-NERD page

Download & code: https://github.com/thunlp/Few-NERD

### Results on Few-NERD (SUP)

| Model | F1 | Paper / Source | Code |
| --- | :---: | --- | --- |
| BERT-Tagger (Ding et al., 2021) | 68.88 | Few-NERD: A Few-shot Named Entity Recognition Dataset | Official |

Go back to the README