Human Experts’ Perceptions of Auto-Generated Summarization Quality
Chapter, Conference object
Published version
Date: 2023
Abstract
In this study we address automatic summaries generated using modern artificial intelligence techniques. Several mathematical methods exist for evaluating the performance of automatic summarization. Such methods are commonly used because they allow many test cases to be assessed with little human effort, whereas manual assessment is challenging and time-consuming. One question is whether the output of such measures matches human perception of summarization quality. We document a study involving the human evaluation of automatic summaries of 22 academic texts. The unique aspect of this study is that our participants were strongly familiar with the texts, having studied them in depth. The results are quite varied and do not suggest unanimous agreement that automatic summaries are of high quality and can be trusted.
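
For context, one widely used family of the mathematical evaluation methods referred to above is ROUGE, which scores a candidate summary by its n-gram overlap with a human-written reference. The sketch below shows a minimal ROUGE-1 recall computation in Python; the ROUGE metric itself is from the literature, while the function name and example texts here are purely illustrative and not taken from the paper.

from collections import Counter

def rouge_1_recall(candidate: str, reference: str) -> float:
    """ROUGE-1 recall: fraction of reference unigrams found in the candidate."""
    cand_counts = Counter(candidate.lower().split())
    ref_counts = Counter(reference.lower().split())
    # Clipped overlap: each reference word is counted at most as often
    # as it appears in the candidate summary.
    overlap = sum(min(count, cand_counts[word])
                  for word, count in ref_counts.items())
    return overlap / sum(ref_counts.values()) if ref_counts else 0.0

# Illustrative usage with made-up texts.
print(rouge_1_recall("the model summarizes the text",
                     "the model summarizes academic text"))  # 0.8

Scores like this are cheap to compute over many test cases, which is exactly why they are popular; whether such overlap scores track human judgments of quality is the question the study examines.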