Title: Robust comparison of similarity measures in analogy based software effort estimation
Keywords: Computer Science; Decision Sciences
Abstract: © 2017 IEEE. Analogy-based software effort estimation (ABE) is a widely adopted method because of the accuracy it offers as well as its intuitiveness. ABE derives an estimated effort value for a new software project by adapting the effort values of its similar past projects. Accurately measuring the similarity between software project cases is an important step in ABE, as it determines whether the retrieved past projects are truly analogous to the new project. However, to the best of our knowledge, no one has systematically evaluated and compared similarity measures for the ABE process. In the present study, the 6 similarity measures that appeared most frequently in the literature over the 5-year period up to the time of writing are systematically compared. Based on a comprehensive empirical experiment using 12 industrial datasets comprising 952 project cases, together with 5 robust performance measures and a robust statistical test method, we found that simple similarity measures such as the Euclidean and Manhattan measures generally offer accurate estimates for software effort estimation datasets. Although studies in other fields frequently discourage the use of these simple similarity measures, the results of the present study instead support them as a crucial part of an ABE model.
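The retrieval-and-adaptation step the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature values, the choice of k, and the use of a plain mean for adaptation are all assumptions made here for the example.

```python
import numpy as np

def abe_estimate(new_features, past_features, past_efforts, k=3, metric="euclidean"):
    """Basic ABE sketch: estimate effort for a new project as the mean
    effort of its k most similar past projects (the analogies)."""
    diffs = past_features - new_features
    if metric == "euclidean":
        dists = np.sqrt((diffs ** 2).sum(axis=1))
    else:  # "manhattan"
        dists = np.abs(diffs).sum(axis=1)
    nearest = np.argsort(dists)[:k]      # indices of the k closest analogies
    return past_efforts[nearest].mean()  # simple mean adaptation

# Toy historical dataset: 4 past projects described by (size, team) features.
past = np.array([[10.0, 3.0], [12.0, 4.0], [50.0, 9.0], [55.0, 10.0]])
efforts = np.array([100.0, 120.0, 500.0, 540.0])
print(abe_estimate(np.array([11.0, 3.5]), past, efforts, k=2))  # → 110.0
```

Swapping `metric` between `"euclidean"` and `"manhattan"` changes only how distance is accumulated over the feature differences, which is why the two measures often retrieve the same analogies when the feature scales are comparable.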
Appears in Collections: CMUL: Journal Articles