My research uses computation to model human bodies, behaviours, and emotions. From my first interactive spatial projects that tracked people walking and moving through a space, to machine learning systems that use electroencephalography (EEG) to detect emotions, I find computational sensing of human bodily expression to be a fascinating avenue for research and design.
Detecting behaviours and emotions
Starting with a custom fabric touch sensor, we developed machine learning (ML) methods to detect social touches (like patting or tickling)1. We could also detect who did the touching, so we asked: can we also detect the same person touching in different ways? Working with many different body sensors, we used ML to detect changes in emotional state via human physiology (a sketch of the general detection pipeline appears after the findings below). Here are some things we found:
Breathing robots can calm people down2
If a robot is breathing calmly along with you while you watch a scary movie, you are more likely to stay calm. We built a breathing robot and used a heart rate sensor to determine whether the robot’s breathing affected heart rate variability (a measure of calmness).
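For the curious, one common way to summarize heart rate variability is RMSSD over inter-beat intervals, where higher values generally indicate a calmer, more parasympathetically driven state. Here is a minimal sketch of that statistic (not our study code; the interval data is made up):

```python
import numpy as np

def rmssd(ibi_ms: np.ndarray) -> float:
    """Root mean square of successive differences between inter-beat
    intervals (in ms); a standard time-domain HRV statistic."""
    diffs = np.diff(ibi_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

# Hypothetical inter-beat intervals before and during robot breathing
before = np.array([810.0, 795.0, 820.0, 805.0, 815.0, 800.0])
during = np.array([850.0, 880.0, 845.0, 890.0, 860.0, 875.0])
print(f"RMSSD before: {rmssd(before):.1f} ms, during: {rmssd(during):.1f} ms")
```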
Sometimes touch can outperform EEG signals3
We had people play a stressful video game while wearing an EEG headset and typing on a keyboard fitted with force-sensitive pads, to see whether we could differentiate stressed vs. relaxed moments in the game. It turned out that how hard someone pressed a key predicted their emotions better than their recorded brainwaves did.
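The analysis boils down to asking how well features from each signal predict the stressed/relaxed label. Here is a minimal sketch of that comparison with scikit-learn, using random stand-in features rather than our dataset (the feature dimensions and classifier choice are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in data: one row per time window, labelled stressed (1) or relaxed (0).
n = 200
labels = rng.integers(0, 2, n)
eeg_features = rng.normal(size=(n, 32))  # e.g., band power per electrode
key_pressure = rng.normal(size=(n, 4)) + labels[:, None]  # press-force stats

for name, X in [("EEG", eeg_features), ("key pressure", key_pressure)]:
    scores = cross_val_score(RandomForestClassifier(random_state=0), X, labels, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```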
Touch and gaze can be used to differentiate emotional states4
We had people tell emotional stories to a robot pet, and used biometrics like touch, gaze, and heart rate to differentiate the emotional content of those stories.
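Across these detection studies the recipe was broadly similar: slice each sensor stream into time windows, compute summary features per window, and train a classifier. Here is a minimal sketch of that general pipeline, with synthetic "pat" and "tickle" pressure streams standing in for real sensor data:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def window_features(signal: np.ndarray, win: int = 50) -> np.ndarray:
    """Chop a 1D sensor stream into fixed windows and compute simple stats."""
    n_windows = len(signal) // win
    w = signal[: n_windows * win].reshape(n_windows, win)
    return np.column_stack(
        [w.mean(1), w.std(1), w.max(1), np.abs(np.diff(w, axis=1)).mean(1)]
    )

# Synthetic pressure streams: slow, even "pats" vs. light, jittery "tickles"
pat = rng.normal(0.5, 0.1, 5000)
tickle = rng.normal(0.3, 0.3, 5000)
X = np.vstack([window_features(pat), window_features(tickle)])
y = np.array([0] * 100 + [1] * 100)

print(f"Gesture classification accuracy: {cross_val_score(SVC(), X, y, cv=5).mean():.2f}")
```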
However, even with all of these ML studies, we wondered whether the emotion detection we were doing really got at the underlying phenomenology of emotion. We were skeptical, and wrote critiques arguing the following:
Dimensional emotion models need an update5
The way we currently measure emotions in HCI/HRI (usually with a 2D dimensional model) is too simple for the complexity of emotion as a phenomenon: it accounts for neither the evolution of emotions through time nor the narrative perspective given by participants. Therefore, we need to incorporate people’s stories into the measurement process.
Measuring and conceptualizing emotions can be hard6
It may just be that emotions are so embedded in their social dimensions that building a live “emotion detector” isn’t possible right now, and that we can only detect very simple states like stressed or relaxed. In that case, the field should focus on biometric event detection and visualization.
This was supported by our work on the CuddleBits, where we noticed that the stories people told about the robots strongly shaped how they evaluated the robots’ emotions:
Complex narratives give rise to complex emotions7
Here we used complexity metrics to try to understand what made a robot’s behaviour seem emotional.
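Setting the paper’s specific metrics aside, one standard way to quantify the complexity of a motion signal is permutation entropy. The sketch below illustrates that general idea, not our exact analysis:

```python
import math
from collections import Counter
import numpy as np

def permutation_entropy(x: np.ndarray, order: int = 3) -> float:
    """Normalized permutation entropy of a 1D signal: 0 means fully regular,
    1 means maximally unpredictable ordering of consecutive samples."""
    patterns = Counter(
        tuple(np.argsort(x[i : i + order])) for i in range(len(x) - order + 1)
    )
    total = sum(patterns.values())
    h = -sum((c / total) * math.log(c / total) for c in patterns.values())
    return h / math.log(math.factorial(order))

t = np.linspace(0, 8 * np.pi, 500)
print(permutation_entropy(np.sin(t)))  # low: smooth, breathing-like motion
print(permutation_entropy(np.random.default_rng(2).normal(size=500)))  # high: noise
```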
People couldn’t understand emotional interactions without grounded narratives8
We built a whole system for improvisational interaction with a robot and recruited professional artists; even they couldn’t make strong claims about the emotional interaction without specific narrative grounding. If we dialed in the robot’s randomness settings correctly, we could make people feel like they were having a somewhat spontaneous interaction, but only if they had a good narrative.
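As a toy illustration of what a “randomness setting” can mean (this is not our system’s actual implementation): blend a regular breathing oscillation with smoothed noise, so a single parameter dials between perfectly regular and visibly spontaneous motion.

```python
import numpy as np

def breathing_motion(duration_s: float, randomness: float, fs: int = 50,
                     rate_hz: float = 0.25, seed: int = 0) -> np.ndarray:
    """Periodic "breathing" position signal with a randomness dial in [0, 1]:
    0 is perfectly regular, 1 is heavily perturbed and more spontaneous."""
    rng = np.random.default_rng(seed)
    t = np.arange(0, duration_s, 1 / fs)
    base = np.sin(2 * np.pi * rate_hz * t)
    # Smooth noise: a random walk softened by a one-second moving average
    noise = np.cumsum(rng.normal(0, 0.05, len(t)))
    noise = np.convolve(noise, np.ones(fs) / fs, mode="same")
    return (1 - randomness) * base + randomness * noise

motion = breathing_motion(duration_s=30, randomness=0.3)  # stream to the motors
```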
Devices and Spaces
Before I started doing CS research, I worked as a media artist in print and film. I was always interested in transforming spaces and making them interactive. From relatively low-tech work using projections on bodies and windows, to more advanced work with object detection, custom electronic displays, and AR/VR web experiences, I’ve been fascinated by how to make an interactive experience feel immersive.
Occasionally, I’ve worked as a consultant and designer for other artists (such as Terri-Lynn Williams-Davidson) to produce interactive projects for their exhibitions. One of my consulting gigs, with Lululemon, turned into a patent9: we made a wearable device to guide people through breathing exercises (I built the electronics and did the haptic design).
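The patent describes the real design; purely to give the flavour of haptic breathing guidance, here is a hypothetical vibration-intensity envelope for one paced breath (the timings and ramp shapes are assumptions, not the patented pattern):

```python
import numpy as np

def breathing_guide_envelope(inhale_s=4.0, hold_s=4.0, exhale_s=4.0, fs=100):
    """One cycle of a vibration-intensity envelope (0 to 1) that ramps up
    during inhale, turns off for the hold, and ramps down during exhale."""
    inhale = np.linspace(0, 1, int(inhale_s * fs))
    hold = np.zeros(int(hold_s * fs))
    exhale = np.linspace(1, 0, int(exhale_s * fs))
    return np.concatenate([inhale, hold, exhale])

envelope = np.tile(breathing_guide_envelope(), 5)  # five guided breaths
# Each sample would set the actuator's drive amplitude at a 100 Hz update rate.
```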
1. Laura Cang, Paul Bucci, Andrew Strang, Jeff Allen, Karon E. MacLean, HY Liu. Different Strokes for Different Folks: Economical Dynamic Surface Sensing: Recognition of Affective Touch and Toucher. International Conference on Multimodal Interaction (ICMI). 2015.
2. Zak Witkower, Xi Laura Cang, Paul Bucci, Karon MacLean, Jessica Tracy. Human psychophysiology is influenced by physical touch with a “breathing” robot. Emotion (in press). 2025.
3. Xi Laura Cang, Rubia R. Guerra, Bereket Guta, Paul Bucci, Laura Rodgers, Hailey Mah, Qianqian Feng, Anushka Agrawal, Karon E. MacLean. FEELing (key) Pressed: Implicit Touch Pressure Bests Brain Activity in Modelling Emotion Dynamics in the Space Between Stressed and Relaxed. IEEE Transactions on Haptics (ToH). 2023.
4. Laura Cang, Paul Bucci, Jussi Rantala, Karon E. MacLean. Discerning Affect from Touch and Gaze During Interaction with a Zoomorphic Robot Pet. IEEE Transactions on Affective Computing (TAFFC). 2021.
5. Paul H. Bucci, Xi Laura Cang, Hailey Mah, Laura Rodgers, Karon E. MacLean. Real Emotions Don’t Stand Still: Toward Ecologically Viable Representation of Affective Interaction. International Conference on Affective Computing and Intelligent Interaction (ACII). 2019.
6. Paul Bucci, David Marino, Ivan Beschastnikh. Affective Robots Need Therapy. ACM Transactions on Human-Robot Interaction (THRI). 2023.
7. Paul Bucci, Lotus Zhang, Xi Laura Cang, Karon E. MacLean. Is it Happy? Behavioural and Narrative Frame Complexity Impact Perceptions of a Simple Furry Robot’s Emotions. ACM Conference on Human Factors in Computing Systems (CHI). 2018.
8. David Marino, Paul Bucci, Oliver Schneider, Karon E. MacLean. Voodle: Vocal Doodling to Sketch Affective Robot Motion. ACM Conference on Designing Interactive Systems (DIS). 2017.
9. Navjot Kailay, Ellisa Kathleen Calder, Sian Victoria Allen, Kerem Dogurga, Adrian Ka Ming Lai, Jean-Louis Iaconis, Afshin Frederick Mehin, Paul Alexander Hendrik Bucci. Systems and wearable devices for guiding breathing of a wearer and methods of using same. Canadian patent application CA3210824A1 (pending). 2022.