Evaluating probabilistic software development effort estimates: Maximizing informativeness subject to calibration
Journal article, Peer reviewed
Published version
Date: 2019-08-08
Original version
Jørgensen M. Evaluating probabilistic software development effort estimates: Maximizing informativeness subject to calibration. Information and Software Technology. 2019;115:93-96. http://dx.doi.org/10.1016/j.infsof.2019.08.006

Abstract
Context: Probabilistic effort estimates convey uncertainty and may provide useful input to plans, budgets and investment analyses. Objective & method: This paper introduces, motivates and illustrates two principles for evaluating the accuracy and other performance criteria of probabilistic effort estimates in software development contexts. Results: The first principle emphasizes consistency between the estimation error measure and the loss function of the chosen type of probabilistic single-point effort estimate. The second principle points to the importance of measuring not just the calibration, but also the informativeness, of estimated prediction intervals and distributions. The relevance of the evaluation principles is illustrated by a performance evaluation of estimates from twenty-eight software professionals who used two different uncertainty assessment methods to estimate the effort of the same thirty software maintenance tasks.
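The two principles in the abstract can be sketched concretely. A minimal illustration, with entirely hypothetical effort data (not taken from the paper's study): principle one says a p50 (median) point estimate should be scored with a loss function for which the median is the optimal estimate, such as the quantile (pinball) loss at q = 0.5; principle two says a set of 90% prediction intervals should be judged both on calibration (hit rate relative to the stated 90%) and on informativeness (interval width, with narrower intervals being more informative at equal calibration).

```python
# Illustrative data: actual effort (hours) for ten tasks, an estimated 90%
# prediction interval [low, high] per task, and a p50 point estimate.
# All numbers are made up for demonstration purposes.
actuals = [12, 40, 25, 8, 60, 15, 33, 50, 20, 45]
intervals = [(8, 20), (25, 70), (15, 40), (5, 15), (40, 55),
             (10, 25), (20, 50), (30, 80), (12, 35), (30, 70)]
p50_estimates = [14, 45, 26, 9, 65, 16, 34, 52, 22, 48]

def pinball_loss(actual, estimate, q):
    """Quantile (pinball) loss: the loss function for which the q-quantile
    is the optimal point estimate. At q = 0.5 it is proportional to the
    absolute error, so it is consistent with a median (p50) estimate."""
    diff = actual - estimate
    return q * diff if diff >= 0 else (q - 1) * diff

# Principle 1: score p50 estimates with the matching loss (q = 0.5),
# not, say, squared error, whose optimal point estimate is the mean.
mean_p50_loss = sum(pinball_loss(a, e, 0.5)
                    for a, e in zip(actuals, p50_estimates)) / len(actuals)

# Principle 2: evaluate prediction intervals on BOTH calibration
# (hit rate vs. the stated 90% confidence level) and informativeness
# (here measured as mean interval width relative to actual effort).
hits = sum(lo <= a <= hi for a, (lo, hi) in zip(actuals, intervals))
hit_rate = hits / len(actuals)
mean_rel_width = sum((hi - lo) / a
                     for a, (lo, hi) in zip(actuals, intervals)) / len(actuals)

print(f"mean pinball loss (q=0.5): {mean_p50_loss:.2f}")
print(f"hit rate (target 0.90):    {hit_rate:.2f}")
print(f"mean relative width:       {mean_rel_width:.2f}")
```

A well-calibrated but uninformative estimator (e.g., intervals so wide they always contain the actual effort) would score a perfect hit rate yet a large relative width, which is why the paper argues both criteria must be reported together.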