Full citation
Reis Guerra, R. (2022). "Feeling (key)pressed: Comparing the ways in which force and self-reports reveal emotion" (Thesis). University of British Columbia.
Abstract
Interactive human-computer systems can be enriched to interpret and respond to users' affective states using computational emotion models, which necessitates the collection of authentic and spontaneous emotion data. Popular emotion modelling frameworks rely on convenient yet static abstractions of emotion (e.g., Ekman's basic emotions and Russell's circumplex). These abstractions often oversimplify complex emotional experiences into single emotion categories. In turn, emotion models guided by such annotations leave out significant aspects of the user's true, spontaneous emotional experience. Richer representations of emotion, negotiated and understood between participants and researchers, can be created with mixed-methods labelling approaches, in which an emotion descriptor is assigned to a recorded segment of experience. However, the resulting emotion annotations are often not ready to use in computational models. In this thesis, we investigate (1) ways to improve the meaningfulness of self-reported emotion annotations, and (2) the implicit expression of emotion in touch pressure. For the first, we propose three strategies for interpreting multiple versions of self-annotated dynamic emotion: combining emotion labels (multi-label classification), extracting alignment metrics from them, and resolving conflicts between them. We evaluate our label-resolution strategies using the FSR EEG Emotion-Labelled (FEEL) dataset (N=16). The FEEL dataset includes brain activity and keypress force data captured over a 10-minute recorded gameplay session, annotated with two methods of self-reported emotion: a continuous annotation and an interview. By featuring multi-pass self-report and user-calibrated scales, the data-collection protocol prioritized capturing genuine emotion evolution. We triangulate multiple self-annotated emotion reports and evaluate the classification accuracy of our three proposed label-resolution strategies. For our second research question, we compare models built on keypress force and brain activity data to understand the implicit expression of emotion in touch pressure. Finally, we reflect on the trade-offs of each strategy for developing computational models of emotion. Our findings suggest that touch-based models outperform those built on brain activity, and that mixed-methods emotion annotations increase the meaningfulness of self-reports.
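To make the three label-resolution strategies concrete, the sketch below implements toy versions of combining, extracting, and resolving over two per-segment label streams (one from a continuous annotation, one from an interview). This is a rough illustration only: the segment granularity, label vocabulary, function names, and the interview-wins tie-break rule are assumptions for the example, not the thesis's actual methods.

```python
# Toy illustration of the abstract's three label-resolution strategies.
# All names and rules here are hypothetical stand-ins, not the FEEL pipeline.

def combine(continuous: list[str], interview: list[str]) -> list[set[str]]:
    """Combine: keep both annotations per segment as a multi-label set."""
    return [{c, i} for c, i in zip(continuous, interview)]

def alignment(continuous: list[str], interview: list[str]) -> float:
    """Extract: a simple agreement ratio between the two self-reports."""
    matches = sum(c == i for c, i in zip(continuous, interview))
    return matches / len(continuous)

def resolve(continuous: list[str], interview: list[str]) -> list[str]:
    """Resolve: on conflict, defer to the retrospective interview label."""
    return [c if c == i else i for c, i in zip(continuous, interview)]

# Hypothetical per-segment labels for a short gameplay recording.
cont = ["calm", "excited", "excited", "frustrated"]
intv = ["calm", "excited", "frustrated", "frustrated"]

print(combine(cont, intv))    # multi-label sets, e.g. {'excited', 'frustrated'}
print(alignment(cont, intv))  # 0.75
print(resolve(cont, intv))    # interview wins the one conflicting segment
```

In the thesis, such strategies are evaluated by classification accuracy on the FEEL dataset; the toy agreement ratio above merely stands in for its alignment metrics.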
SPIN Authors
Reis Guerra, R.
Year Published
2022