Evaluating Emerging Software Development Technologies:
Lessons Learned from Assessing Aspect-oriented Programming
Gail C. Murphy, Robert J. Walker, and Elisa L.A. Baniassad
Technical Report TR-98-10, Department of Computer Science, University of British Columbia.
Abstract
Two of the most important and most difficult questions one
can ask about a new software development technique are whether the
technique is useful and whether the technique is usable. Various
flavours of empirical study are available to evaluate these questions,
including surveys, case studies, and experiments. These different
approaches have been used extensively in a number of domains,
including management science and human-computer interaction. A
growing number of software engineering researchers are using
experimental methods to statistically validate hypotheses about
relatively mature software development aids. Less guidance
is available for the developer of a new, still-evolving software
development technique who must determine, within some cost
bounds, whether the technique shows promise. We faced this
challenge when assessing a new programming technique called
aspect-oriented programming. To assess the technique, we chose to
apply both a case study approach and a series of four experiments
because we wanted to understand and characterize the kinds of
information that each approach might provide when studying a technique
that is in its infancy. Our experiences suggest some avenues for
further developing empirical methods aimed at evaluating software
engineering questions. For instance, guidelines on how different
observational techniques can be used as multiple sources of data would
be helpful when planning and conducting a case study. For the
experimental situation, more guidance is needed on how to balance the
precision of measurement with the realism necessary to investigate
programming issues. In this paper, we describe and critique the
evaluation methods we employed, and discuss the lessons we have
learned. These lessons are applicable to researchers attempting to
assess other new programming techniques that are in an early stage of
development.
IEEE Copyright Notice: This work has been submitted to the IEEE
for possible publication. Copyright may be transferred without
notice, after which this version may no longer be accessible.
URL: http://www.cs.ubc.ca/labs/se/papers/1998/UBC-CS-TR-98-10.html
File: /pub/www/cs.ubc.ca/docs/labs/se/papers/1998/UBC-CS-TR-98-10.html