John's Review
Problem
This paper presents a tool that allows developers to determine who (a person or organizational group) has relevant experience with a particular code element (a single module or a subsystem).
Contributions
- Presents a tool (the Expertise Browser) that aggregates experience atoms (e.g. source code deltas) for individual developers and developer groups (a minimal sketch of this aggregation follows the list).
- Presents a visualization of levels of experience and allows a developer to perform simple queries, such as which developers have experience with a particular subsystem.
- Presents results from the use of the Expertise Browser on two telecommunication network element projects in Europe.
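
To make the aggregation concrete, here is a minimal sketch of counting experience atoms per developer and subsystem, assuming deltas arrive as (author, file path) pairs; the record layout and paths are invented, not the paper's:

```python
from collections import defaultdict

# Hypothetical deltas from a version-control log: (author, file path).
# Each delta counts as one experience atom (EA).
deltas = [
    ("alice", "netmgmt/alarms/filter.c"),
    ("alice", "netmgmt/alarms/filter.c"),
    ("bob",   "netmgmt/alarms/route.c"),
    ("alice", "billing/rate.c"),
]

# Aggregate EAs per (developer, subsystem); here the subsystem is
# simply the top-level directory of the changed file.
ea_counts = defaultdict(int)
for author, path in deltas:
    subsystem = path.split("/")[0]
    ea_counts[(author, subsystem)] += 1

# Query: who has experience with the "netmgmt" subsystem?
experts = sorted(
    ((n, dev) for (dev, sub), n in ea_counts.items() if sub == "netmgmt"),
    reverse=True,
)
print(experts)  # [(2, 'alice'), (1, 'bob')]
```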
Weaknesses
- Using the timing of interactions of different developers with a modification request is not a very accurate way of determining how hard it is to find an expert, which makes the argument that finding experts is a critical problem less credible. (A sketch of this timing analysis follows the list.)
- Using the vertical and horizontal size of text to indicate the number of experience atoms and the number of contributing people, respectively, seems a very odd way to represent this information.
- The independence model that is used to calculate Table 2 is never explained.
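
For concreteness, a minimal sketch of the kind of timing analysis questioned in the first weakness, assuming each MR carries a time-ordered list of (timestamp, developer) delta events; the data here is invented:

```python
# Hypothetical MR records: each maps to a time-ordered list of
# (timestamp_in_days, developer) delta events.
mrs = {
    "MR-101": [(0.0, "alice"), (1.0, "alice"), (9.5, "bob"), (10.0, "alice")],
    "MR-102": [(0.0, "carol"), (4.0, "carol"), (5.0, "carol")],
}

def second_person_position(events):
    """Fraction of the MR's interval that has elapsed when a second
    distinct developer first appears; None if only one developer
    ever touched the MR."""
    start, end = events[0][0], events[-1][0]
    first_dev = events[0][1]
    for t, dev in events:
        if dev != first_dev:
            return (t - start) / (end - start)
    return None

for mr, events in mrs.items():
    print(mr, second_person_position(events))
# MR-101 0.95  (second developer appears only in the last 10%)
# MR-102 None  (single-developer MR)
```

As the weakness notes, a second developer appearing late in an MR's interval could reflect ordinary workflow (e.g., a final review or handoff) rather than the difficulty of locating an expert.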
Questions
- The authors use change deltas as a measure of experience. What developer experience is not captured by change deltas? How could this experience be quantified?
- What are people's thoughts about the quantitative resume?
- What is the "OA&M interface"? (Section 2, first paragraph)
- What is the difference between a 'patch' and a 'bug fix'? (Section 2.4, first bullet point)
- Why would a subsystem node have to be expanded in order to display the experience for the subsystem? (Section 3.2, third paragraph)
- Do developers really spend 70% of their time communicating? (This result is not from the authors, but from a study that they are quoting. I just want to know if others find this number surprisingly high.)
Belief
Overall, the paper is well written and demonstrates an interesting tool for showing people's experience. I have little trouble believing that the tool is useful, and they provide data showing that it helps both newcomers and experienced project members. However, I found their use of the timing of changes to modification requests to be a poor technique for showing that finding experts is a critical problem.
Brett's Review
Problem
When working on a large project in a large team, it can be difficult to find an expert, either in a specific area or for a specific piece of code. This paper presents a tool, the Expertise Browser (ExB), to help automate the identification of people within a team who are experts in various areas.
Contributions
- A tool that automates the identification of who is an expert for a specific piece of code or area
- Results on how different groups within a team, depending on their level of experience with the code base, tend to seek out experts (either experts for a piece of code or for what a specific person has done)
Weaknesses
- The managerial structure of a project is not considered, which could skew how a person works with the code base and their apparent level of expertise (e.g., a manager telling a subordinate to make a change blindly: the manager is effectively the expert without touching the code directly, yet the tool credits the subordinate)
- The authors examine only two projects, both of which seem to be major corporate projects, and do not attempt to analyze usage in a project that is less structured in terms of membership (e.g., an open source project)
- The paper also does not consider smaller development teams and how the usefulness of the tool might dwindle in such circumstances
Questions
- How would the tool handle an automated refactoring of a code base without flagging the committer as an expert because of the automated change? (One possible filtering heuristic is sketched after these questions.)
- Are results skewed if someone changes a piece of code multiple times because of constant revisions stemming from the original author's lack of knowledge?
- How would the tool be used in a team with a completely (or almost completely) flat managerial hierarchy?
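
One possible filtering heuristic for the refactoring question above (an assumption of mine, not anything the paper proposes) is to withhold EA credit from deltas that look like bulk or tool-generated changes; the markers and threshold below are illustrative:

```python
# Illustrative heuristic only: these markers and the threshold are my
# assumptions, not anything the paper proposes.
BULK_MARKERS = ("automated refactor", "rename", "reformat", "mass update")
MAX_FILES_FOR_CREDIT = 20  # unusually wide commits look tool-generated

def counts_as_ea(commit_message, files_touched):
    """Decide whether a delta should earn its author expertise credit.
    Bulk markers in the message or a very wide footprint suggest an
    automated change rather than hands-on experience."""
    msg = commit_message.lower()
    if any(marker in msg for marker in BULK_MARKERS):
        return False
    return files_touched <= MAX_FILES_FOR_CREDIT

print(counts_as_ea("Fix alarm routing off-by-one", 2))                 # True
print(counts_as_ea("Automated refactor: rename Mgr -> Manager", 340))  # False
```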
Belief
I have no problem believing that a tool for discovering who is an expert for a certain chunk of code is useful, especially in a large project. Having this automated would definitely simplify discovering this knowledge and keep it from going stale. But the authors do need to address how this tool might be used in different team structures, along with code changes that stem from automated refactoring or people performing "code monkey"-like changes for someone else.
Maria's Review
Problem
Finding people with relevant expertise in a large project is a difficult task, especially when the team is distributed and there is no direct contact between developers. Finding experts is critical to the success of the project, but it must also be done in a timely manner.
Contributions
- The paper defines a measure of expertise based on experience atoms. Experience atoms (EAs) are elementary units of experience (here, deltas in code in CVS), and appear as a direct result of a person's activity with respect to a work product.
- The paper also presents the Expertise Browser (ExB), which uses the above measure in its implementation. The tool is a Java applet designed to let users visually query and explore relationships between the product and the people/organizations who have the desired expertise in it. It adheres to a number of desirable properties the authors have identified.
- Studies of how developers in different kinds of groups/projects use the tool.
- The authors go beyond the intended use of their tool, consulting with users to find out what other information the tool could possibly help them extract, e.g., a manager overview, testing purposes, a "visual resume".
Weaknesses
- No real description is given of the experiment used to determine how frequently a second person started participating only in the last 10% of an MR's interval (Section 2.3). E.g., did these people have other projects they were working on?
- In Section 3.4, they also say that they use a logarithmic/square-root mapping to represent the number of EAs in a unit of interest. But wouldn't this skew a user's perception, making the less relevant units/individuals stand out more while the most important ones don't appear much above the others? (A worked comparison follows this list.)
- The tool only looks at people who work with code deltas. What about managers/architects who may also have expertise in the product, which wouldn't be reflected by the tool since they don't actually work with code? The paper mentions the possibility of using artifacts other than code, but doesn't go into any details.
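
To make the scaling concern in the second weakness concrete, here is a small comparison of linear, square-root, and logarithmic mappings from EA counts to a visual size (the counts are invented):

```python
import math

ea_counts = [1, 10, 100, 1000]  # invented EA counts for four developers

for n in ea_counts:
    sqrt_size = math.sqrt(n)
    log_size = math.log10(n) + 1  # +1 keeps a single EA visible
    print(f"{n:5d} EAs -> linear {n:5d}, sqrt {sqrt_size:5.1f}, log {log_size:4.1f}")

# 1000x more EAs renders only ~32x larger under sqrt and ~4x under log:
# the flattening effect the weakness above worries about.
```

A developer with 1000 times the EAs is drawn only about 32 times larger under a square-root mapping and about 4 times larger under a logarithmic one, which is exactly the flattening effect questioned above.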
Questions
1) In Section 3.4, they talk about representing different types of EAs with different colours and a bar chart. I'm not exactly sure of the usefulness of this; and what are the different types of EAs, anyway?
2) Doesn't this method rely on the developer strictly separating the tasks they're working on by checking in the code for one before they start another? Do developers actually work this way? What if they set off to track down a bug they've found while working on something else?
Belief
The paper presents a good classification system for expertise, and a tool that finds experts on certain project/subject matter. It seems a useful step, especially for distributed teams, although I wonder about the 100% accuracy the authors claim. The visual interface/search is a good idea, although the current version seems somewhat awkward.
Navjot's Review
Problem
Finding relevant expertise in geographically distributed development teams, or even within large teams in the same location, can consume vital resources and time.
Contributions/Claims
1. Empirical evidence that finding the relevant expertise is time-consuming and hence is "an important practical problem"
2. A way to quantify expertise so that (a) experts may be compared to one another and (b) experts with a desirable distribution of expertise may be identified
3. A web-based tool with visualizations that assist in the identification of experts or expertise profiles of organizations, teams, or individual developers
Weaknesses
1. The results of Section 2.3 could well be because developers need time to finish their current tasks before joining another MR.
2. The paper does not give a very clear sense of exactly what the novel contributions are. It appears - from the paper itself - that the idea of using change history to judge -if not measure - expertise is not new. In that case, the primary contribution is perhaps the way the tool organizes and presents change information.
3. Significant expertise lies with project managers, who do not necessarily work on code. It would be interesting to know whether the kind of expertise generally sought within product development teams requires significant familiarity with code. In fact, the observation that the second or third developer comes in very late on an MR could just be because the kind of expertise sought most often is higher level and does not require their contribution to the MR.
Questions
1. Section 2.4 discusses domains of experience. So, do the authors have a way of distinguishing between the domains of an EA? (A sketch of one possibility follows.)
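
If each delta carried the MR type recorded in the change-management system, distinguishing domains might look like the sketch below; the field names and type labels are my assumptions, not the paper's schema:

```python
from collections import Counter

# Hypothetical delta metadata; the field names are my assumptions,
# not the paper's schema. "mr_type" would come from the
# change-management system that tracks each MR.
deltas = [
    {"author": "alice", "mr_type": "new_feature"},
    {"author": "alice", "mr_type": "bug_fix"},
    {"author": "bob",   "mr_type": "patch"},
]

def domain_profile(deltas, author):
    """Count a developer's EAs per domain, so the shape of their
    experience (bug fixes vs. new code vs. patches) is visible,
    not just the total."""
    return Counter(d["mr_type"] for d in deltas if d["author"] == author)

print(domain_profile(deltas, "alice"))
# Counter({'new_feature': 1, 'bug_fix': 1})
```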
Belief
I expect that ExB consistently identifies the right experts, and the interfaces/visualizations should be useful too.