Full citation
MacLean, K. E., Schneider, O., and Seifi, H., "Multisensory Haptic Interactions: Understanding the Sense and Designing for It," in Handbook of Multimodal-Multisensor Interfaces, vol. 1, S. Oviatt, B. Schuller, P. Cohen, and A. Krueger, Eds. ACM Books / Morgan & Claypool, 2017, pp. 97-142.
Abstract
Our haptic sense comprises both taction, or cutaneous information obtained through receptors in the skin, and kinesthetic awareness of body forces and motions. Broadly speaking, haptic interfaces to computing systems are anything a user touches or is touched by, to control, experience or receive information from something with a computer in it. A keyboard and mouse, a physical button on a kitchen blender and the glass touchscreen on your smartphone are energetically passive haptic interfaces: no external energy is pumped into the user’s body from a powered actuator. Most readers will have encountered energetically active haptic feedback as a vibrotactile (VT) buzz or forces in a gaming joystick, a force-feedback device in a research lab, or a physically interactive robot. Much more is possible.
When we bring touch into an interaction, we invoke characteristics that are unique or accentuated relative to other modalities. Like most powerful design resources, these traits also impose constraints. The job of a haptic designer is to understand these “superpowers” and their costs and limits, and then to deploy them for an optimally enriched experience.
Both jobs are relatively uncharted, even though engineers have been building devices with the explicit intention of haptic display for over 25 years, and psychophysicists have been studying this rich, complex sense for as many decades. What makes it so difficult? Our haptic sense is really many different senses, neurally integrated; meanwhile, the technology of haptic display is anything but stable, with engineering challenges of a different nature than those for graphics and sound. In the last few years, technological advances from materials to robotics have opened new possibilities for the use of energetically active haptics in user interfaces, our primary focus here. Needs are exposed at a large scale by newly ubiquitous technology like “touch” screens crying out for physical feedback, and high-fidelity virtual reality visuals that are stalled in effectiveness without force display.
SPIN Authors
Year Published
2017