Line: 1 to 1
John's Review
Line: 81 to 81
The paper presents a good classification system for expertise, and a tool that finds experts on a certain project/subject matter. It seems a useful step, especially for distributed teams, although I wonder about the 100% accuracy the authors claim. The visual interface/search is a good idea, although the current version seems somewhat awkward.
Added:
> >
Navjot's Review

Problem
Finding relevant expertise in geographically distributed development teams, or even within large teams at the same location, can consume vital resources and time.

Contributions/Claims
1. Empirical evidence that finding the relevant expertise is time consuming and hence "an important practical problem".
2. A way to quantify expertise so that (a) experts may be compared to one another, and (b) experts with a desirable distribution of expertise may be identified.
3. A web-based tool with visualizations that assist in identifying experts, or the expertise profiles of organizations, teams, or individual developers.

Weaknesses
1. The results of Section 2.3 could well be because developers need time to finish their current tasks before joining another MR.
2. The paper does not give a very clear sense of exactly what the novel contributions are. It appears, from the paper itself, that the idea of using change history to judge, if not measure, expertise is not new. In that case, the primary contribution is perhaps the way the tool organizes and presents change information.
3. Significant expertise lies with project managers, who do not necessarily work on code. It would be interesting to know whether the kind of expertise generally sought within product development teams requires significant familiarity with code. In fact, the observation that the second or third developer comes in very late on an MR could just be because the kind of expertise sought most often is higher level and does not require their contribution to the MR.

Questions
1. Section 2.4 discusses domains of experience. Do the authors have a way of distinguishing between the domains of an EA?

Belief
I expect that ExB consistently identifies the right experts, and the interfaces/visualizations should be useful too.
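Navjot's contribution 2 (quantifying expertise so experts can be compared) can be sketched as counting change deltas, in the spirit of experience atoms. This is a minimal illustration under my own assumptions, not the paper's actual implementation; the change log, `expertise`, and `experts_for` names are all hypothetical:

```python
from collections import defaultdict

# Hypothetical change log: one (author, file) pair per delta (EA),
# standing in for CVS commit history.
deltas = [
    ("alice", "parser.c"), ("alice", "parser.c"), ("bob", "parser.c"),
    ("alice", "ui.c"), ("bob", "ui.c"), ("bob", "ui.c"), ("bob", "ui.c"),
]

def expertise(deltas):
    """Count experience atoms per (author, file): more deltas = more expertise."""
    counts = defaultdict(int)
    for author, path in deltas:
        counts[(author, path)] += 1
    return dict(counts)

def experts_for(path, counts):
    """Rank authors for one file by EA count, so experts can be compared (claim 2a)."""
    ranked = [(author, n) for (author, f), n in counts.items() if f == path]
    return sorted(ranked, key=lambda pair: -pair[1])

counts = expertise(deltas)
print(experts_for("ui.c", counts))  # bob (3 EAs) ranks above alice (1 EA)
```

Note that a ranking like this inherits Navjot's weakness 3: anyone who never touches code (e.g. a project manager) gets no EAs at all.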
Line: 1 to 1
John's Review
Line: 51 to 51
Belief
I have no problem believing that a tool for discovering who is an expert for a certain chunk of code is useful, especially in a large project. Having this automated would definitely simplify discovering this knowledge and keep it from becoming stale. But the authors do need to address how this tool might be used in different team structures, along with code changes that stem from automated refactoring or from people performing "code monkey"-like changes for someone else.
Added:
> >
Maria's Review

Problem
Finding people with relevant expertise on a project or tool, in a large project, is a difficult task, especially when a team is distributed, with no direct contact between developers. Finding experts is critical to the success of the project, but it must also be done in a timely manner.

Contributions
- The paper defines a measure of expertise based on experience atoms. Experience atoms (EAs) are elementary units of experience (here, deltas in code in CVS), and appear as a direct result of a person's activity with respect to a work product.
- The paper also presents an Expertise Browser (ExB), which uses the above classification in its implementation. This tool is a Java applet designed to let users visually query and explore the relationships between a product and the people/organizations who have the desired expertise in it. It adheres to a number of desirable properties the authors have identified.
- Some experience studies into how developers in different kinds of groups/projects use the tool.
- The authors go beyond the intended use of their tool, consulting with users to find out what other information the tool could possibly help them extract, e.g. a manager overview, testing purposes, a "visual resume".

Weaknesses
- There is no real description of the experiment used to determine the frequency with which a second person started participating in a project only in the last 10% of it (Section 2.3). E.g., did these people have other projects they were working on?
- In Section 3.4, they also say that they use a logarithmic/square-root mapping to represent the number of EAs in a unit of interest. But wouldn't this skew a user's perception, making the less relevant units/individuals stand out more, while the most important ones wouldn't look much above the others?
- The tool only looks at people who work with code deltas. What about managers/architects who may also have expertise on the product, which wouldn't be reflected by the tool since they don't actually work with code? The paper mentions the possibility of using artifacts other than code, but doesn't go into any details.

Questions
1. In Section 3.4, they talk about representing different types of EAs with different colours and a bar chart. I'm not exactly sure of the usefulness of this; and what are the different types of EAs, anyway?
2. Doesn't this method rely on developers strictly separating the tasks they work on, by checking in the code for one task before they start another? Do developers actually work this way? What if they set off to track down a bug they've found while working on something else?

Belief
The paper presents a good classification system for expertise, and a tool that finds experts on a certain project/subject matter. It seems a useful step, especially for distributed teams, although I wonder about the 100% accuracy the authors claim. The visual interface/search is a good idea, although the current version seems somewhat awkward.
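Maria's scaling concern can be made concrete with a small sketch. Under a square-root mapping (the exact formula here is my assumption; the paper only says logarithmic/square-root), a 100x spread in EA counts shrinks to roughly a 10x visual spread, so the top expert no longer dominates the display:

```python
import math

# Hypothetical EA counts for three developers in one unit of interest.
ea_counts = [10, 100, 1000]

# Square-root mapping in the spirit of Section 3.4 (assumed formula).
scaled = [math.sqrt(n) for n in ea_counts]

# Raw counts differ by 100x, but the scaled values differ only ~10x.
print([round(s, 1) for s in scaled])  # [3.2, 10.0, 31.6]
```

Whether this compression is a bug or a feature depends on intent: it keeps small contributors visible at the cost of understating how far ahead the strongest expert really is.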
Line: 1 to 1
John's Review
Line: 31 to 31
Brett's Review

Problem
Changed:
< < When working on a large project in a large team, it can be difficult to find an expert either in a specific area or for a specific piece of code. This paper presents a tool, the Exepertise Browser (ExB), to help automate in identifying people within a team who are experts in various aspects.
> > When working on a large project in a large team, it can be difficult to find an expert either in a specific area or for a specific piece of code. This paper presents a tool, the Expertise Browser (ExB), to help automate in identifying people within a team who are experts in various aspects.
Contribution
Line: 1 to 1
Deleted:
< <
John's Review

Problem
Line: 27 to 26
Belief
Overall, the paper is well written and demonstrates an interesting tool for showing people's experience. I have little trouble believing that the tool is useful, and they provide data showing that it is useful for both newcomers and experienced project members. However, I found their use of the timing of changes to modification requests to be a poor technique for showing that finding experts is a critical problem.
Added:
> >
Brett's Review

Problem
When working on a large project in a large team, it can be difficult to find an expert either in a specific area or for a specific piece of code. This paper presents a tool, the Expertise Browser (ExB), to help automate identifying people within a team who are experts in various aspects.

Contribution

Weaknesses

Questions

Belief
I have no problem believing that a tool for discovering who is an expert for a certain chunk of code is useful, especially in a large project. Having this automated would definitely simplify discovering this knowledge and keep it from becoming stale. But the authors do need to address how this tool might be used in different team structures, along with code changes that stem from automated refactoring or from people performing "code monkey"-like changes for someone else.