Full citation
Cang, X.L., "From Devices to Data and Back Again: A Tale of Computationally Modelling Affective Touch." PhD Thesis, University of British Columbia, 2024.
Abstract
Emotionally responsive Human-Robot Interaction (HRI) has captured our curiosity and imagination in fantastical ways throughout much of modern media. Touch is a valuable yet sorely missed channel of emotion communication when in-person interaction is impractical, and machine-mediated touch could help bridge that distance. In this thesis, we investigate, in two parts, how we might enable machines to recognize natural and spontaneous emotional touch expressions.

First, we take a close look at ways machines engage with human emotion by examining examples of machines in three emotionally communicative roles: as a passive witness receiving and logging the emotional state of their (N=30) human counterparts, as an influential actor whose own breathing behaviour alters human fear response (N=103), and as a conduit for the transmission of emotion expression between human users (N=10 dyads and N=21 individuals).

Next, we argue that for devices to be truly emotionally reactive, they must address the time-varying, dynamic nature of lived emotional experience. Any emotion recognition engine intended for use under realistic conditions should acknowledge that emotions evolve over time. Machine responses may then vary with ‘emotion direction’: for example, acting in an encouraging way when the user is happy and getting happier, but presenting calming behaviours when the user is happy but getting anxious. To that end, we develop a multi-stage emotion self-reporting procedure for collecting N=16 users’ dynamic emotion expression during videogame play. From the keypress force with which they control their in-game character, we benchmark individualized recognition performance for emotion direction, even finding it to exceed that of brain activity as measured by continuous electroencephalography (EEG). As a proof of concept for a training process that generates models of true, spontaneous emotion expression evolving with the user, we then revise our protocol to better accommodate naturalistic emotion expression. We build a custom tool to support the data collection and labelling of personal storytelling sessions and evaluate user impressions (N=5 participants with up to 3 stories each, for a total of 10 sessions).

Finally, we conclude with actionable recommendations for advancing the training and machine recognition of naturalistic and dynamic emotion expression.
Year Published
2024