Eyecatch: Simulating Visuomotor Coordination for Object Interception
ACM Transactions on Graphics (SIGGRAPH 2012)
Resource Type: Proceedings
We present a novel framework for animating human characters performing fast visually guided tasks, such as catching a ball. The main idea is to consider the coordinated dynamics of sensing and movement. Based on experimental evidence about such behaviors, we propose a generative model that constructs interception behavior online, using discrete submovements directed by uncertain visual estimates of target movement. An important aspect of this framework is that eye movements are included as well, and play a central role in coordinating movements of the head, hand, and body. We show that this framework efficiently generates plausible movements and generalizes well to novel scenarios.
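The paper's model coordinates eye, head, hand, and body movements; the full system is far richer than anything shown here. Purely as a loose illustration of the core idea in the abstract, namely discrete submovements replanned online from uncertain visual estimates of target motion, the following is a hypothetical 1D sketch. All function names, parameters, and numbers are invented for illustration and are not taken from the paper.

```python
import random

def estimate(observations):
    # Crude state estimate from noisy position observations:
    # position = latest observation, velocity = mean of successive differences.
    if len(observations) < 2:
        return observations[-1], 0.0
    diffs = [b - a for a, b in zip(observations, observations[1:])]
    return observations[-1], sum(diffs) / len(diffs)

def simulate_catch(ball0=0.0, ball_v=1.0, hand0=8.0, hand_speed=2.0,
                   noise=0.05, replan_every=3, lookahead=3, steps=40,
                   catch_radius=0.5, seed=0):
    """Hand intercepts a constant-velocity ball using discrete submovements.

    Every `replan_every` steps a new submovement target is chosen from the
    current (noisy) estimate of the ball's future position; between replans
    the hand simply moves toward the last target. Returns True if the hand
    ever comes within `catch_radius` of the ball.
    """
    rng = random.Random(seed)
    hand = hand0
    obs = []
    target = hand
    for t in range(steps):
        ball = ball0 + ball_v * t
        obs.append(ball + rng.gauss(0.0, noise))      # uncertain visual input
        if t % replan_every == 0:                     # trigger a submovement
            pos, vel = estimate(obs)
            target = pos + vel * lookahead            # predicted ball position
        step = max(-hand_speed, min(hand_speed, target - hand))
        hand += step
        if abs(hand - ball) < catch_radius:
            return True
    return False
```

With the illustrative defaults the faster hand intercepts the approaching ball; if the ball recedes faster than the hand can move, the interception fails, which mirrors the online, estimate-driven character of the behavior described above.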
Unique ID: TR-2012-00036
URL Alias: /research/eyecatch