ContrastNER: Contrastive-based Prompt Tuning for Few-shot NER
Chapter, Peer reviewed, Conference object
Accepted version
Permanent link: https://hdl.handle.net/11250/3114381
Publication date: 2023
Original version: https://doi.org/10.1109/COMPSAC57700.2023.00038
Abstract
Prompt-based language models have produced encouraging results in numerous applications, including Named Entity Recognition (NER). NER aims to identify the entities in a sentence and assign their types. However, the strong performance of most existing prompt-based NER approaches depends heavily on designing discrete prompts and a verbalizer that maps model predictions to entity categories, both of which are complicated undertakings. To address these challenges, we present ContrastNER, a prompt-based NER framework that combines discrete and continuous tokens in its prompts and uses contrastive learning to learn the continuous prompts and predict entity types. Experimental results show that ContrastNER achieves performance competitive with state-of-the-art NER methods in high-resource settings and outperforms them in low-resource settings, without requiring extensive manual prompt engineering or verbalizer design.
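To make the two ingredients named in the abstract concrete, below is a minimal PyTorch sketch of (a) a prompt that mixes frozen discrete token embeddings with learnable continuous vectors and (b) a supervised contrastive loss that pulls representations of tokens sharing an entity type together. All names, shapes, and hyperparameters here (SoftPromptEncoder, supervised_contrastive_loss, temperature=0.1) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftPromptEncoder(nn.Module):
    """Mixes discrete (frozen word-embedding) prompt tokens with learnable
    continuous prompt vectors, in the spirit of continuous prompt tuning."""

    def __init__(self, embed_dim: int, num_continuous: int):
        super().__init__()
        # Continuous prompt tokens are free parameters learned end-to-end.
        self.continuous = nn.Parameter(torch.randn(num_continuous, embed_dim) * 0.02)

    def forward(self, discrete_embeds: torch.Tensor) -> torch.Tensor:
        # discrete_embeds: (batch, prompt_len, embed_dim) from the frozen embedding table.
        batch = discrete_embeds.size(0)
        cont = self.continuous.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the continuous tokens to the discrete prompt.
        return torch.cat([cont, discrete_embeds], dim=1)


def supervised_contrastive_loss(features: torch.Tensor,
                                labels: torch.Tensor,
                                temperature: float = 0.1) -> torch.Tensor:
    """SupCon-style objective: representations with the same entity-type label
    are pulled together, different labels are pushed apart."""
    features = F.normalize(features, dim=-1)                  # (N, d)
    sim = features @ features.T / temperature                 # (N, N)

    n = features.size(0)
    not_self = ~torch.eye(n, dtype=torch.bool, device=features.device)
    # Positive pairs share the same entity-type label (excluding self-pairs).
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self

    # Log-softmax over all non-self pairs.
    sim = sim.masked_fill(~not_self, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
    return loss.mean()


# Toy usage: 8 token representations labelled with entity types {0: O, 1: PER, 2: LOC}.
feats = torch.randn(8, 128)
labels = torch.tensor([0, 1, 1, 2, 0, 2, 1, 0])
print(supervised_contrastive_loss(feats, labels))
```

In a full pipeline such a contrastive term would typically be combined with the encoder's token-level prediction loss, so the continuous prompt vectors and the entity-type decision are trained jointly rather than through a hand-crafted verbalizer.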