Named Entity Recognition

== Discussion of the State of the Art ==
Implementations of Named Entity Recognition (NER) relied for a long time on algorithms based on the Hidden Markov Model (HMM) <ref>https://dl.acm.org/citation.cfm?id=1119204</ref><ref>https://en.wikipedia.org/wiki/Hidden_Markov_model</ref> and on Conditional Random Fields (CRF) <ref>https://en.wikipedia.org/wiki/Conditional_random_field</ref>.
For instance, the widely used Stanford Named Entity Recognizer <ref>https://nlp.stanford.edu/software/CRF-NER.html</ref> uses CRF.
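As a concrete illustration, the minimal sketch below tags one sentence with the CRF-based Stanford NER through NLTK's StanfordNERTagger wrapper. The jar and classifier paths are placeholders: they depend on where the Stanford NER distribution was unpacked locally, and a Java runtime must be installed.

<syntaxhighlight lang="python">
# Sketch: tagging a sentence with the CRF-based Stanford NER via NLTK.
# The two paths below are placeholders for a local Stanford NER install
# (https://nlp.stanford.edu/software/CRF-NER.html); Java is required.
from nltk.tag import StanfordNERTagger
from nltk.tokenize import word_tokenize

tagger = StanfordNERTagger(
    'classifiers/english.all.3class.distsim.crf.ser.gz',  # CRF model file
    'stanford-ner.jar',                                    # Stanford NER jar
    encoding='utf-8')

tokens = word_tokenize("Barack Obama visited Lausanne in 2014.")
print(tagger.tag(tokens))
# Roughly: [('Barack', 'PERSON'), ('Obama', 'PERSON'), ('visited', 'O'),
#           ('Lausanne', 'LOCATION'), ('in', 'O'), ('2014', 'O'), ('.', 'O')]
</syntaxhighlight>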
In recent years, however, advances in both GPU technology and deep learning techniques have triggered the advent of Long Short-Term Memory <ref>https://en.wikipedia.org/wiki/Long_short-term_memory</ref> neural network (LSTMNN) architectures. These are often used in conjunction with a CRF output layer to obtain state-of-the-art performance <ref>https://arxiv.org/abs/1603.01360</ref><ref>https://arxiv.org/abs/1508.01991</ref>, and they provide models that have become a fundamental feature of products at major companies <ref>https://en.wikipedia.org/wiki/Long_short-term_memory#History</ref>. LSTMNNs are therefore currently preferred to HMMs.
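To make the combined architecture more tangible, here is a minimal sketch of a BiLSTM-CRF tagger in PyTorch, in the spirit of the two papers cited above rather than their exact models. It assumes the third-party pytorch-crf package for the CRF layer; the vocabulary size, tag set, and dimensions are illustrative.

<syntaxhighlight lang="python">
# Minimal BiLSTM-CRF sketch (cf. arXiv:1603.01360, arXiv:1508.01991).
# Assumes `pip install pytorch-crf`; sizes below are illustrative.
import torch
import torch.nn as nn
from torchcrf import CRF

class BiLSTMCRF(nn.Module):
    def __init__(self, vocab_size, num_tags, embed_dim=100, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # The bidirectional LSTM reads the sentence in both directions;
        # the two outputs are concatenated, hence hidden_dim // 2 each.
        self.lstm = nn.LSTM(embed_dim, hidden_dim // 2,
                            bidirectional=True, batch_first=True)
        self.emissions = nn.Linear(hidden_dim, num_tags)
        # The CRF layer learns transition scores between adjacent tags.
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, tokens, tags):
        feats, _ = self.lstm(self.embed(tokens))
        return -self.crf(self.emissions(feats), tags)  # negative log-likelihood

    def predict(self, tokens):
        feats, _ = self.lstm(self.embed(tokens))
        return self.crf.decode(self.emissions(feats))  # Viterbi tag sequence

model = BiLSTMCRF(vocab_size=10000, num_tags=9)  # e.g. BIO tags, 4 types + O
tokens = torch.randint(0, 10000, (1, 6))          # one 6-token sentence
print(model.predict(tokens))                      # e.g. [[3, 3, 0, 5, 0, 0]]
</syntaxhighlight>

The point of the CRF layer on top of the LSTM is that it scores whole tag sequences rather than individual tokens, which discourages invalid transitions such as an I-PER tag immediately following a B-LOC tag.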


As a last note, a recent paper [8] introduces the possibility of using Iterated Dilated Convolutional Neural Networks (ID-CNNs) in place of LSTMNNs: by processing tokens in parallel rather than sequentially, they drastically improve computation time while keeping the same level of accuracy (see the sketch below), which suggests ID-CNNs could be the next step in improving NER.

Matters of great concern in NER currently include training-data scarcity and inter-domain generalization [9]. To be efficient on a language domain, current NER systems need large labeled datasets related to that domain [10]. Such training data is not available for all language domains, which makes it impossible to apply NER to them efficiently. Furthermore, if a language domain does not follow strict language conventions and allows for a wide use of the language, the model will fail to generalize due to excessive heterogeneity; examples of such domains are sports and finance. This is the reason one of the big challenges is, as stated in [9], "adapt[ing] models learned on domains where large amounts of annotated training data are available to domains with scarce annotated data".
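To give an intuition for why ID-CNNs parallelize so well, the sketch below stacks 1-D convolutions with exponentially growing dilation, the core idea behind the architecture in [8]. It is a simplified illustration rather than the paper's exact model, and all dimensions are made up: each doubling of the dilation roughly doubles the receptive field, so a few layers see a long context, and every token position is computed in parallel instead of through an LSTM's token-by-token recurrence.

<syntaxhighlight lang="python">
# Simplified sketch of the dilated-convolution idea behind ID-CNNs [8].
# Dimensions are illustrative, not taken from the paper.
import torch
import torch.nn as nn

dim, num_tags = 128, 9
layers = []
for dilation in (1, 2, 4):  # receptive field grows exponentially per layer
    layers += [nn.Conv1d(dim, dim, kernel_size=3, dilation=dilation,
                         padding=dilation),  # padding keeps sequence length
               nn.ReLU()]
encoder = nn.Sequential(*layers)

# Token embeddings for one 10-token sentence: (batch, channels, seq_len).
x = torch.randn(1, dim, 10)
scores = nn.Conv1d(dim, num_tags, kernel_size=1)(encoder(x))
print(scores.shape)  # torch.Size([1, 9, 10]) -- per-token tag scores
</syntaxhighlight>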

== Bibliography ==
<references />