A couple of years ago we organised a CHI workshop on meaningful interaction in physiological computing. As much as I felt this was an important area for investigation, I also found the topic very hard to get a handle on. I recently revisited this problem while working on a co-authored book chapter with Kiel for our forthcoming Springer collection, entitled ‘Advances in Physiological Computing’, due out next May.
On reflection, much of my difficulty revolved around the complexity of defining meaningful interaction in context. For systems like BCI or ocular control, where input control is the key function, the meaningfulness of the HCI is self-evident. If I want an avatar to move forward, I expect my BCI to translate that intention into analogous action at the interface. But biocybernetic systems, where spontaneous psychophysiology is monitored, analysed and classified, are a different story. The goal of this system is to adapt in a timely and appropriate fashion and evaluating the literal meaning of that kind of interaction is complex for a host of reasons.
The meaning of a biocybernetic interaction is influenced by three components: (1) the measures used to represent the user and the validity of those measures, (2) the sensitivity of the algorithm designed to classify the state of the user, and (3) what the adaptation actually does at the interface. For this post, I’ll talk about the measures and how this component contributes to the meaning of the interaction.
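To make the three components concrete, here is a minimal sketch of a biocybernetic loop. Everything in it is invented for illustration: the function names, the use of heart-rate change as the measure, the naive threshold classifier, and the adaptation choices are all assumptions, not a description of any real system.

```python
# Hypothetical sketch of the three components of a biocybernetic loop.
# Measure, classifier, and adaptation are all placeholder choices.

def measure_user(raw_hr_bpm, baseline_bpm):
    """Component 1: a measure representing the user's state.
    Here, heart-rate change from baseline (an illustrative choice)."""
    return raw_hr_bpm - baseline_bpm

def classify_state(hr_delta, threshold=10.0):
    """Component 2: classify the user's state from the measure.
    A naive fixed threshold stands in for a real classifier."""
    return "high_arousal" if hr_delta >= threshold else "low_arousal"

def adapt_interface(state):
    """Component 3: what the adaptation actually does at the interface."""
    if state == "high_arousal":
        return "reduce task demand"
    return "maintain current demand"

# One pass through the loop:
delta = measure_user(raw_hr_bpm=85.0, baseline_bpm=70.0)
action = adapt_interface(classify_state(delta))
```

The point of separating the three functions is that the "meaning" of the interaction can break down at any of the three stages independently: an invalid measure, an insensitive classifier, or an inappropriate adaptation.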
In the past, I have written about the importance of ensuring that the psychophysiological measures underpinning biocybernetic adaptation reflect the subjective experience of the user. This is a logical position, because the user is at the heart of the interaction and is, to an extent, the one who imbues the interaction with meaning. But there are problems with this position.
The logic of selecting measures that effectively mirror subjective experience begins to break down when one considers how psychophysiological measures relate to first-person experience. A well-known study of stress reactivity by Cacioppo et al. (1994) illustrates this point. Participants were exposed to a stressor (public speaking) while heart rate (HR) was measured; the 44 participants differed enormously in their cardiovascular response, from low reactors, whose HR changed by around 5 bpm (beats per minute), to high reactors, whose HR increased by as much as 30 bpm. Despite this huge difference in HR response, subjective estimates of stress did not differ significantly between high and low reactors, which illustrates an important point – psychophysiology and subjective experience can (and will) dissociate from one another.
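The dissociation can be sketched in code: a system that classifies stress from HR reactivity will label the two groups differently even when their subjective reports are equivalent. The threshold and the subjective rating values below are invented for illustration; only the 5 bpm and 30 bpm figures come from the study as described above.

```python
# Illustrative only: a naive HR-reactivity classifier disagrees with
# (hypothetical) subjective ratings. Threshold and ratings are assumptions.

def classify_stress(hr_change_bpm, threshold=10.0):
    """Label a participant 'stressed' from HR reactivity alone."""
    return "stressed" if hr_change_bpm >= threshold else "calm"

# Equal subjective ratings (hypothetical), very different HR reactivity.
low_reactor  = {"hr_change": 5.0,  "subjective_stress": 7}
high_reactor = {"hr_change": 30.0, "subjective_stress": 7}

print(classify_stress(low_reactor["hr_change"]))   # "calm"
print(classify_stress(high_reactor["hr_change"]))  # "stressed"
```

A system built purely on the physiological measure thus tells a different story about each participant than the participants tell about themselves.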
This makes perfect sense when you consider that subjective assessment is fundamentally defined by the limits of conscious introspection, whereas psychophysiology is shaped by both conscious and unconscious processes. There are even some psychological states, such as sleepiness, where the ability to introspect or accurately self-monitor is adversely affected and becomes unreliable.
So, is it reasonable to expect a technological system performing a real-time analysis and classification of psychophysiological data to accurately mirror the subjective experience of the person? Should the meaning of the interaction be defined purely with reference to the subjective self-awareness of the user?
My feeling is that physiological computing systems ought to be capable of delivering insight; the ability of this category of technology to tell the user something he or she does not know should be a strength, not a weakness.
Meaningful interaction will depend on the availability of measures that have been validated in other ways besides subjective self-assessment. If we define meaning purely in user-centred terms, we run the risk of removing one of the most valuable assets that physiological computing has to offer.