Exploring Multilingual Word Embedding Alignments in BERT Models: A Case Study of English and Norwegian
Chapter, Peer reviewed, Conference object, Journal article
Accepted version
Date: 2023
Original version: https://doi.org/10.1007/978-3-031-47994-6_4

Abstract
Contextual language models, such as transformers, can solve a wide range of language tasks, from text classification to question answering and machine translation. Like many deep learning models, their performance depends heavily on the quality and amount of data available for training. This poses a problem for low-resource languages, such as Norwegian, which cannot provide the necessary amount of training data. In this article, we investigate the use of multilingual models as a step toward overcoming the data sparsity problem for minority languages. Specifically, we study how words are represented by multilingual BERT models across two languages of interest: English and Norwegian. Our analysis shows that the multilingual model encodes English-Norwegian word pairs similarly, automatically aligning semantics across the two languages without supervision. It also shows that a word's embedding encodes information about the language to which the word belongs. We therefore believe that, in pre-trained multilingual models, knowledge from one language can be transferred to another without direct supervision, helping to address the data sparsity problem for low-resource languages.
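The kind of cross-lingual comparison described above can be illustrated with a minimal sketch. This is not the paper's actual experimental pipeline: the model checkpoint, sentences, word pair, and pooling strategy below are illustrative assumptions. The sketch loads a multilingual BERT model via the Hugging Face transformers library, extracts a contextual embedding for an English word and its Norwegian translation from parallel sentences, and compares them with cosine similarity.

```python
# Minimal sketch (assumed setup, not the paper's method): compare multilingual
# BERT embeddings of an English word and its Norwegian translation.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-multilingual-cased"  # assumed multilingual BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    """Mean-pool the last-layer hidden states of the subword tokens of `word`
    as it occurs in `sentence` (one simple way to get a contextual word vector)."""
    encoded = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**encoded).last_hidden_state[0]  # (seq_len, hidden_dim)
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    sent_ids = encoded["input_ids"][0].tolist()
    # Locate the contiguous subword span of `word` inside the sentence encoding.
    for start in range(len(sent_ids) - len(word_ids) + 1):
        if sent_ids[start:start + len(word_ids)] == word_ids:
            return hidden[start:start + len(word_ids)].mean(dim=0)
    raise ValueError(f"'{word}' not found as a contiguous subword span")

# Illustrative English-Norwegian translation pair in parallel contexts.
en_vec = word_embedding("The dog sleeps on the floor.", "dog")
no_vec = word_embedding("Hunden sover på gulvet.", "Hunden")
print(torch.cosine_similarity(en_vec, no_vec, dim=0).item())
```

Under the alignment reported in the abstract, such translation pairs would be expected to yield noticeably higher cosine similarity than unrelated cross-lingual word pairs, even though the model was never trained with parallel supervision.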