Software evolvability - empirically discovered evolvability issues and human evaluations

Doctoral thesis (article-based)
Electronic publication (630 KB, 66 pages)
TKK dissertations, 160
The evolution of a software system can take decades and cost up to several billion euros. Software evolvability refers to how easily software can be understood, modified, adapted, corrected, and further developed. It has been estimated that software evolvability can explain 25% to 38% of the costs of software evolution. Prior research has presented software evolvability criteria and quantified them using source code metrics. However, empirical observations of software evolvability issues, and human evaluations of them, have largely been ignored.

This dissertation empirically studies human evaluations and observations of software evolvability issues. The work utilizes both qualitative and quantitative research methods. Empirical data was collected from controlled experiments with student subjects and by observing issues discovered in real industrial settings.

This dissertation presents a new classification for software evolvability issues. The information provided by the classification is extended by a detailed analysis of the evolvability issues discovered in code reviews and their distribution across issue types. Furthermore, this work studies human evaluations of software evolvability; more specifically, it focuses on the interrater agreement of the evaluations, the effect of demographics, the evolvability issues that humans find most significant, and the relationship between human evaluations and evaluations based on source code metrics.

The results show that code review performed after light functional testing reveals three times as many evolvability issues as functional defects. We also discovered a new type of evolvability issue, called "solution approach", which indicates a need to rethink the current solution rather than reorganize it. We are not aware of any prior research in the software engineering domain that presents or discusses such issues.
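To make the distinction concrete, the sketch below contrasts the two kinds of issues in a small hypothetical Python example (the functions and data are invented for illustration, not taken from the dissertation): a reorganization issue, where the structure of otherwise sound code should be improved, and a "solution approach" issue, where the chosen solution itself should be rethought.

```python
# Hypothetical illustration of two kinds of evolvability issues.

# (a) A reorganization issue: the logic is correct but everything is
#     packed into one function; classic refactoring (e.g. "extract
#     method") would improve the structure without changing the idea.
def report(orders):
    total = sum(o["price"] * o["qty"] for o in orders)
    lines = [f'{o["id"]}: {o["price"] * o["qty"]}' for o in orders]
    return "\n".join(lines + [f"TOTAL: {total}"])

# (b) A "solution approach" issue: the solution itself needs rethinking,
#     not just reorganizing -- here a hand-rolled O(n^2) bubble sort
#     that should simply be replaced by the library sort.
def top_orders(orders, k):
    items = list(orders)
    for i in range(len(items)):
        for j in range(len(items) - 1 - i):
            if items[j]["qty"] < items[j + 1]["qty"]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items[:k]

def top_orders_rethought(orders, k):
    # Rethinking the approach: use the built-in sort instead.
    return sorted(orders, key=lambda o: o["qty"], reverse=True)[:k]
```

No amount of local reorganization of `top_orders` removes the underlying issue; only replacing the approach does, which is what distinguishes this issue type from ordinary refactoring targets.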
We found weak evidence that software evolvability evaluations are affected more by a person's role in the organization and relationship (authorship) to the code than by education and work experience. A comparison of code metrics and human evaluations revealed that metrics cannot detect all of the evolvability issues that humans find.
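Interrater agreement of the kind studied here is commonly quantified with a chance-corrected statistic such as Cohen's kappa. The sketch below is a minimal stdlib-only implementation with invented example ratings (the rating scale and data are assumptions for illustration, not the dissertation's actual measure or data):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed proportion of agreement.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if both raters assigned labels independently
    # according to their own marginal label frequencies.
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((count_a[lbl] / n) * (count_b[lbl] / n)
              for lbl in set(count_a) | set(count_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical evolvability ratings (1 = poor ... 3 = good) from two raters.
rater_1 = [1, 2, 2, 3, 1, 2, 3, 3]
rater_2 = [1, 2, 3, 3, 1, 1, 3, 2]
print(round(cohen_kappa(rater_1, rater_2), 3))  # → 0.442
```

A kappa of 0 means agreement no better than chance and 1 means perfect agreement, so the moderate value in this toy example illustrates why subjective evolvability evaluations are worth studying for rater consistency.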
Keywords: software evolvability issue, refactoring, controlled experiment, observation, interrater agreement, code review
  • [Publication 1]: Mika Mäntylä, Jari Vanhanen, and Casper Lassenius. 2003. A taxonomy and an initial empirical study of bad smells in code. In: Proceedings of the 19th International Conference on Software Maintenance (ICSM 2003). Amsterdam, The Netherlands. 22-26 September 2003, pages 381-384. © 2003 IEEE. By permission.
  • [Publication 2]: Mika V. Mäntylä and Casper Lassenius. 2006. Subjective evaluation of software evolvability using code smells: An empirical study. Empirical Software Engineering, volume 11, number 3, pages 395-431.
  • [Publication 3]: Mika V. Mäntylä. 2005. An experiment on subjective evolvability evaluation of object-oriented software: Explaining factors and interrater agreement. In: Proceedings of the 4th International Symposium on Empirical Software Engineering (ISESE 2005). Noosa Heads, Queensland, Australia. 17-18 November 2005, 10 pages. © 2005 IEEE. By permission.
  • [Publication 4]: Mika V. Mäntylä and Casper Lassenius. 2006. Drivers for software refactoring decisions. In: Guilherme Horta Travassos, José Carlos Maldonado, and Claes Wohlin (editors). Proceedings of the 5th ACM/IEEE International Symposium on Empirical Software Engineering (ISESE 2006). Rio de Janeiro, Brazil. 21-22 September 2006, pages 297-306.
  • [Publication 5]: Mika V. Mäntylä and Casper Lassenius. 2009. What types of defects are really discovered in code reviews? IEEE Transactions on Software Engineering, accepted for publication. © 2009 IEEE. By permission.