HCI Experiments
A place to put tips, suggestions, and best practices on running experiments.
Procedure
- Design and develop experiment
- Get ethics approval (http://www.rise.ubc.ca)
- Recruit participants
- Run the experiment
- Analyze the data
- Report the data
Ethics
The following samples come from an accepted ethics application. It may be useful to reuse some of the boilerplate wording in future applications.
Recruiting
There's a lot of great information on recruiting participants here: HCIStudyParticipantRecruitingResources.
Running the Experiment
Don't forget to have an extra consent form for participants, should they want their own copy.
Booking a room
To book a room in the Usability Lab (X727), go to http://hct.ece.ubc.ca/mrbs/admin.php. There, you'll need to create a new experiment before you can reserve a room for it.
As of June 13, 2014, there is a JavaScript bug preventing easy creation of new experiments. When you attempt to submit the new-experiment form, an error dialog pops up saying the Sponsor field is empty, even though it is filled in. If this happens, you can work around it by inspecting the Sponsor input field in your browser's developer tools and deleting that element from the DOM. There is an additional hidden input field that still carries the sponsor information, so the form will then submit correctly. The admins have been informed, but have decided not to fix it.
Participant receipt form
You'll need a form for participants to sign confirming that you have paid them for taking part in the experiment. A sample template to follow is attached to this page: receipt_generic.docx. At the completion of the experiment, pass the signed receipts on to the program assistant for reimbursement.
Analyzing the Data
Common statistical methods used in analyzing experiment data.
Parametric tests
Running Repeated Measures ANOVA in R
Suppose your data is in a file all_results.csv with the following form:
participant,slow_userCorrect,med_userCorrect,fast_userCorrect,...
1,99,50,20,...
2,103,66,32,...
...
You could run an ANOVA and post-hoc tests comparing slow vs. medium vs. fast (within subjects/repeated measures) across the participants as follows:
all_results <- read.csv("all_results.csv")
# Keep only the slow vs. med vs. fast userCorrect columns
smf <- all_results[c("slow_userCorrect","med_userCorrect","fast_userCorrect")]
# Create the participant column for each of the 3 conditions (used when stacked)
participant <- rep(all_results$participant, 3)
# Stack the data for the repeated-measures ANOVA (one row per participant-condition pair)
smf_stack <- stack(smf)
smf_stack[3] <- participant
rm(participant)
# Name the columns
colnames(smf_stack) <- c("numCorrect", "condition", "participant")
# Treat participant as a factor so Error(participant/condition) groups by participant rather than treating it as a numeric covariate
smf_stack$participant <- factor(smf_stack$participant)
writeLines("\nSummary of Slow/Medium/Fast\n----------------------------------------")
print(summary(smf))
# run the ANOVA
aov.out = aov(numCorrect ~ condition + Error(participant/condition), data=smf_stack)
writeLines("\n\nANOVA Results\n----------------------------------------")
print(summary(aov.out))
# run the post-hoc tests (t-Test with Holm correction)
writeLines("\n\nPost-hoc Test Results (Pairwise t-Test with Holm correction)\n----------------------------------------")
print(with(smf_stack, pairwise.t.test(numCorrect, condition, p.adjust.method="holm", paired=T)))
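If you later want to report an effect size alongside the ANOVA result (see Reporting the data below), partial eta-squared for the condition effect can be computed from the sums of squares in the aov output. This is a minimal sketch, not part of the original analysis; it assumes the variable names from the code above and looks up the within-subjects stratum of the summary by name:
# Partial eta-squared for condition: SS_condition / (SS_condition + SS_error)
aov_summary <- summary(aov.out)
# Find the within-subjects stratum (labelled "Error: participant:condition")
within_name <- grep("participant:condition", names(aov_summary), value=TRUE)
within_tab <- aov_summary[[within_name]][[1]]
rn <- trimws(rownames(within_tab))
ss_effect <- within_tab[rn == "condition", "Sum Sq"]
ss_error <- within_tab[rn == "Residuals", "Sum Sq"]
cat("Partial eta-squared for condition:", ss_effect / (ss_effect + ss_error), "\n")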
Non-parametric Tests
For analyzing non-parametric data, such as Likert-formatted items, consider using the tests listed below. It is also worthwhile to read through an email from Brian Gleeson on the topic.
Repeated Measures / Within Subjects
- Friedman Test
- Wilcoxon Signed-Rank Test
Running Friedman and Wilcoxon in R
Suppose your data is in a file all_results.csv with the following form:
participant,...,slow_avatarLetter,med_avatarLetter,fast_avatarLetter,...
1,...,5,4,3,...
2,...,4,4,2,...
...
Here slow_avatarLetter, med_avatarLetter, and fast_avatarLetter represent Likert-formatted responses to the same question asked after the slow, medium, and fast conditions (non-parametric repeated measures). You could run a Friedman test and Wilcoxon post-hoc tests to determine whether there are significant differences between the responses (slow vs. medium vs. fast) as follows:
all_results <- read.csv("all_results.csv")
# Helper to compute the mode (most frequent value) of a vector
Mode <- function(x) {
  ux <- unique(x)
  ux[which.max(tabulate(match(x, ux)))]
}
# Keep only the slow vs. med vs. fast avatarLetter Likert responses
diff_smf <- all_results[c("slow_avatarLetter","med_avatarLetter","fast_avatarLetter")]
# Create the participant column for each of the 3 conditions (used when stacked)
participant <- rep(all_results$participant, 3)
# Stack the data for the Friedman test (one row per participant-condition pair)
diff_smf_stack <- stack(diff_smf)
diff_smf_stack[3] <- participant
rm(participant)
# Name the columns
colnames(diff_smf_stack) <- c("rating", "condition", "participant")
writeLines("\nSummary of Slow/Medium/Fast\n----------------------------------------")
print(summary(diff_smf))
diff_smf_modes <- c(Mode(diff_smf[,1]), Mode(diff_smf[,2]), Mode(diff_smf[,3]))
cat(c("Modes: ", diff_smf_modes, "\n"))
# Run the Friedman test
writeLines("\n\nFriedman Rank Sum Test Results\n----------------------------------------")
diff_smf_results <- friedman.test(rating ~ condition | participant, data=diff_smf_stack)
print(diff_smf_results)
# Run the post-hoc tests (Wilcoxon signed-rank tests with Holm correction)
writeLines("\n\nPost-hoc Test Results (Pairwise Wilcoxon Test with Holm correction)\n----------------------------------------")
print(with(diff_smf_stack, pairwise.wilcox.test(rating, condition, p.adjust.method="holm", paired=T)))
Note that running this code will often produce warnings from R about ties and zero values preventing exact computation of a p-value. The tie warning occurs when two or more participants have the same (non-zero) difference between the pair of conditions being compared; the zero warning occurs when a participant gives the same response in both conditions (e.g., P1 slow = 4, P1 med = 4). In both cases R falls back to a normal approximation for the p-value, and in practice these warnings are commonly ignored and the approximate results reported.
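If an effect size is needed for the Friedman test, Kendall's W (the coefficient of concordance) can be derived from the Friedman chi-squared statistic as W = chi-squared / (n * (k - 1)), where n is the number of participants and k is the number of conditions. A minimal sketch using the variables defined above:
# Kendall's W as an effect size for the Friedman test: W = chi^2 / (n * (k - 1))
n <- length(unique(diff_smf_stack$participant))  # number of participants
k <- length(unique(diff_smf_stack$condition))    # number of conditions
kendalls_w <- unname(diff_smf_results$statistic) / (n * (k - 1))
cat("Kendall's W:", kendalls_w, "\n")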
Between Subjects
- Kruskal-Wallis
- Mann-Whitney U
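Running Kruskal-Wallis and Mann-Whitney U in R
A minimal sketch of how these could be run in R, assuming (hypothetically) a file between_results.csv with one row per participant, a between-subjects grouping column called group, and a Likert-response column called rating; these file and column names are illustrative only:
between_results <- read.csv("between_results.csv")
# Make sure the grouping variable is a factor
between_results$group <- factor(between_results$group)
# Omnibus test across all groups (Kruskal-Wallis rank sum test)
print(kruskal.test(rating ~ group, data=between_results))
# Post-hoc pairwise comparisons (Mann-Whitney U / Wilcoxon rank-sum tests, unpaired, with Holm correction)
print(with(between_results, pairwise.wilcox.test(rating, group, p.adjust.method="holm", paired=F)))
# With exactly two groups, a single Mann-Whitney U test can be run directly:
# print(wilcox.test(rating ~ group, data=between_results))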
Reporting the data
TODO: Give examples of how to report the typical results. P-value and effect size. Examples for: ANOVA, t-test, Friedman, Wilcoxon, Kruskal-Wallis, and Mann-Whitney U.
--
PeterBeshai - 13 Jun 2014