Show simple item record

dc.contributor.advisor: Mello, Gustavo
dc.contributor.advisor: Yazidi, Anis
dc.contributor.author: Aaby, Pernille
dc.date.accessioned: 2022-09-13T08:44:36Z
dc.date.available: 2022-09-13T08:44:36Z
dc.date.issued: 2022
dc.identifier.uri: https://hdl.handle.net/11250/3017423
dc.description.abstract: Nowadays, contextual language models can solve a wide range of language tasks such as text classification, question answering, and machine translation. These tasks often require the model to have knowledge of general language understanding, such as how words relate to each other. This understanding is acquired through a pre-training stage in which the model learns features from raw text data. However, we do not fully understand all the features the model learns during this pre-training stage. Is there information yet to be utilized? Can we make predictions more explainable? This thesis aims to extend the knowledge of what features a language model has acquired. We have chosen the BERT model architecture and have analyzed its word representations from two feature perspectives. The first perspective investigated similarities and dissimilarities between English and Norwegian word representations by evaluating their performance on a word retrieval task and a language detection task. The second perspective analyzed how a word representation changes if the word stands in the wrong context or if the word is inferred through the model without context. [en_US]
dc.language.iso: eng [en_US]
dc.publisher: OsloMet - storbyuniversitetet [en_US]
dc.relation.ispartofseries: ACIT;2022
dc.subject: Multilingual models [en_US]
dc.subject: Word embeddings [en_US]
dc.title: Exploring multilingual and contextual properties in word representations from BERT [en_US]
dc.type: Master thesis [en_US]
dc.description.version: publishedVersion [en_US]
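
The abstract's second perspective, comparing a word's representation in context against its representation when the word is fed to the model alone, can be illustrated with a minimal sketch. This is not code from the thesis; it assumes the Hugging Face transformers library, the bert-base-multilingual-cased checkpoint, and mean-pooling over the word's sub-tokens, all of which are assumptions for illustration only.

```python
# Minimal sketch (not from the thesis): compare a word's BERT representation
# in context vs. without context, using an assumed multilingual checkpoint.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-multilingual-cased"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

def word_vector(text: str, word: str) -> torch.Tensor:
    """Mean of last-layer hidden states over the word's sub-tokens in `text`."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_dim)
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    # Locate the word's sub-token span inside the encoded input.
    for i in range(len(ids) - len(word_ids) + 1):
        if ids[i:i + len(word_ids)] == word_ids:
            return hidden[i:i + len(word_ids)].mean(dim=0)
    raise ValueError(f"{word!r} not found in {text!r}")

in_context = word_vector("The bank approved the loan.", "bank")
no_context = word_vector("bank", "bank")  # word inferred without context
print(f"cosine similarity: {torch.cosine_similarity(in_context, no_context, dim=0).item():.3f}")
```

A lower cosine similarity between the two vectors indicates that the surrounding context shifts the word's representation more strongly, which is the kind of contextual effect the second perspective examines.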


Associated file(s)


This item appears in the following collection(s)
