Just read a very interesting and provocative paper entitled “How emotion is made and measured” by Kirsten Boehner and colleagues. The paper provides a counter-argument to the perspective that emotion should be measured/quantified/objectified in HCI and used as input to an affective computing system or evaluation methodology. Instead, they propose that emotion is a dynamic interaction that is socially constructed and culturally mediated. In other words, the experience of anger is not a score of 7 on a 10-point scale that is fixed in time, but an unfolding iterative process based upon beliefs, social norms, expectations, and so on.
This argument seems fine in theory (to me) but difficult in practice. I get the distinct impression the authors are addressing the way emotion may be captured as part of an HCI evaluation methodology. But they go on to question the empirical approach in affective computing. In this part of the paper, they choose their examples carefully. Specifically, they focus on the category of ‘mirroring’ technology (see earlier post) wherein representations of affective states are conveyed to other humans via technology. The really interesting idea here is that emotional categories are not given by a machine intelligence (e.g. happy vs. sad vs. angry) but generated via an interactive process. For example, friends and colleagues provide the semantic categories used to classify the emotional state of the person. Or literal representations of facial expression (a web-cam shot for instance) are provided alongside a text or email to give the receiver an emotional context that can be freely interpreted. This is a very interesting approach to how an affective computing system may provide feedback to its users. Furthermore, I think once affective computing systems are widely available, the interpretive element of the software may be adapted or adjusted via an interactive process of personalisation.
So, the system provides an affective diagnosis as a first step, which is refined and developed by the person – or even by others as time goes by. Much like the way Amazon makes a series of recommendations based on your buying patterns that you can edit and tweak (if you have the time).
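To make that loop concrete, here’s a minimal sketch of the “diagnosis first, refinement second” idea in Python. Everything here is hypothetical (the paper specifies no API): the system offers a first-pass label, and the person or their friends override it, gradually building a personal emotion vocabulary.

```python
# Hypothetical sketch: a machine diagnosis that the user refines over time.

class AffectiveProfile:
    """Stores a person's own emotion vocabulary and their corrections."""

    def __init__(self, default_labels=("happy", "sad", "angry")):
        self.labels = list(default_labels)
        self.corrections = {}  # system label -> user's preferred label

    def diagnose(self, system_label):
        """First-pass machine label, remapped by any past feedback."""
        return self.corrections.get(system_label, system_label)

    def refine(self, system_label, user_label):
        """The person (or their friends) overrides the machine's category."""
        if user_label not in self.labels:
            self.labels.append(user_label)  # the vocabulary grows socially
        self.corrections[system_label] = user_label

profile = AffectiveProfile()
profile.refine("angry", "frustrated")  # the user edits the diagnosis
print(profile.diagnose("angry"))       # now reported as "frustrated"
print(profile.diagnose("happy"))       # unedited labels pass through
```

The point of the sketch is that the machine’s categories are a starting point, much like Amazon’s editable recommendations, rather than the final word.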
My big problem with this paper was that a very interesting debate was framed as an either/or position. So, if you use psychophysiology to index emotion, you’re disregarding the experience of the individual by using objective conceptualisations of that state. If you use self-report scales to quantify emotion, you’re rationalising an unruly process by imposing a bespoke scheme of categorisation, and so on. The perspective of the paper reminded me of the tiresome debate in psychology between objective/quantitative data and subjective/qualitative data about which method delivers “the truth.” I say ‘tiresome’ because I tend towards the perspectivist view that both approaches provide ‘windows’ on a phenomenon, both of which have advantages and disadvantages.
But it’s an interesting and provocative paper that gave me plenty to chew over.
FutureLab have published a discussion paper entitled “Neurofeedback: is there a potential for use in education?” It’s interesting to read a report devoted to the practical uses of neurofeedback for non-clinical populations. In short, the report covers definitions of neurofeedback & example systems (including EEG-based games like Mindball and MindFlex) as background. Then, three potential uses of neurofeedback are considered: training for sports performance, training for artistic performance and training to treat ADHD. The report doesn’t draw any firm conclusions, as might be expected given the absence of systematic research programmes (in education). Aside from flagging up a number of issues (intrusion, reliability, expense), it’s obvious that we don’t know how these techniques are best employed in an educational environment, i.e. how long do students need to use them? What kind of EEG changes are important? How might neurofeedback be combined with other training techniques?
As I see it, there are a number of distinct application domains to be considered: (1) neurofeedback to shift into the desired psychological state prior to a learning experience or examination (drawn from sports neurofeedback), (2) adapting educational software in real-time to keep the learner motivated (to avoid disengagement or boredom), and (3) teaching children about biological systems using biofeedback games (self-regulation exercises plus human biology practical). I’m staying with non-clinical applications here but obviously the same approaches may be applied to ADHD.
(1) and (3) above both correspond to a traditional biofeedback paradigm where the user works with the processed biological signal to develop a degree of self-regulation, which will hopefully transfer with practice. (2) is more interesting in my opinion; in this case, the software is being adapted in order to personalise and optimise the learning process for that particular individual. In other words, an efficient psychological state for learning is being created in situ by dynamic software adaptation. This approach isn’t so good for encouraging self-regulatory strategies compared to traditional biofeedback, but I believe it is more potent for optimising the learning process itself.
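A sketch of what approach (2) might look like in code, with the caveat that the engagement index and thresholds here are illustrative choices of mine (the ratio beta/(alpha+theta) is one commonly cited EEG engagement index), not anything prescribed by the FutureLab report:

```python
# Hypothetical sketch of real-time adaptation driven by an EEG index.

def engagement_index(theta, alpha, beta):
    """A commonly cited EEG engagement ratio: beta / (alpha + theta)."""
    return beta / (alpha + theta)

def adapt_difficulty(difficulty, index, low=0.4, high=0.8):
    """Nudge task difficulty to keep the learner engaged, not overloaded.

    Thresholds (0.4, 0.8) and the 1-10 difficulty scale are placeholders;
    a real system would calibrate them per learner.
    """
    if index < low:           # disengaged or bored -> make the task harder
        return min(difficulty + 1, 10)
    if index > high:          # overloaded -> ease off
        return max(difficulty - 1, 1)
    return difficulty         # in the zone, leave it alone
```

The loop never asks the learner to self-regulate; it quietly reshapes the task around whatever state the learner happens to be in, which is exactly why it optimises learning without teaching self-regulation.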
Research into affective computing has prompted a question from some in the HCI community about formalising the unformalisable. This is articulated in this 2005 paper by Kirsten Boehner and colleagues. In essence, the argument goes like this – given that emotion and cognition are embodied biopsychological phenomena, can we ever really “transmit” the experience to a computer? Secondly, if we try to convey emotions to a computer, don’t we just trivialise the experience by converting it into another type of cold, quantified information? Finally, hasn’t the computing community already had its fingers burned by attempts to have machines replicate cognitive phenomena with little to show for it (e.g. AI research in the 1980s)?
OK. The first argument seems spurious to me. Physiological computing or affective computing will never transmit an exact representation of private psychological events. That’s just setting the bar too high. What physiological computing can do is operationalise the psychological experience, i.e. to represent a psychological event or continuum in a quantified, objective fashion that should be meaningfully associated with the experience of that psychological event. As you can see, we’re getting into deep waters already here. The second argument is undeniable but I don’t understand why it is a criticism. Of course we are taking an experience that is private, personal and subjective and converting it into numbers. But that’s what the process of psychophysiological measurement is all about – moving from the realm of experience to the realm of quantified representation. After all, if you studied an ECG trace of a person in the midst of a panic attack, you wouldn’t expect to experience a panic attack yourself, would you? Besides, converting emotions into numbers is the only way a computer has to represent psychological status.
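To illustrate what I mean by operationalisation, here’s a toy example (all numbers and names are mine, purely for illustration): heart rate is expressed relative to a person’s own resting baseline, so the output is a calibrated index that is *associated* with arousal, not a transmission of the experience itself.

```python
# Toy illustration of operationalising a psychological continuum:
# the number stands for the state, relative to a personal baseline.

import statistics

def arousal_score(sample_hr, baseline_hrs):
    """Express current heart rate as standard deviations above rest."""
    mean = statistics.mean(baseline_hrs)
    sd = statistics.stdev(baseline_hrs)
    return (sample_hr - mean) / sd

baseline = [62, 64, 61, 63, 65, 62]  # resting heart rates in bpm (made up)
print(arousal_score(90, baseline))   # well above baseline -> high arousal
print(arousal_score(63, baseline))   # near baseline -> low arousal
```

Nothing about the lived experience survives the conversion, and that is the point: the score is a quantified representation that a computer can act on, nothing more.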
As for the last argument, I’m on unfamiliar ground here, but I hope the HCI community can learn from past mistakes; specifically, being too literal and unrealistically ambitious. Unfortunately the affective computing debate sometimes seems to run down these well-trodden paths. I’ve read papers where researchers ponder how computers will ‘feel’ emotions or whether the whole notion of emotional computing is an oxymoron. Getting computers to represent the psychological status of users is a relative business that needs to take a couple of baby steps before we try to run.
Just to show how out of touch I am with CHI stuff, I stumbled upon a workshop entitled “evaluating affective interfaces – innovative approaches” this afternoon. Only 4 years after the actual event. Here’s a link to the web page with details of all papers.
There’s a short summary of a project called ‘Mobile Heart Health’ in the latest issue of IEEE Pervasive Computing (April-June 2009). The project was conducted at Intel Labs and uses an ambulatory ECG sensor to connect to a mobile telephone. The ECG monitors heart rate variability; if high stress is detected, the user is prompted by the phone to run through a number of relaxation therapies (controlled breathing) to provide ‘just-in-time’ stress management. It’s an interesting project, both in conceptual terms (I imagine pervasive monitoring and stress management would be particularly useful for cardiac outpatients) and in terms of interface design (how to alert the stressed user to their stressed state without making them even more stressed). Here’s a link to the magazine which includes a downloadable pdf of the article.
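The core loop of a system like this can be sketched in a few lines. To be clear, this is my own reconstruction, not Intel’s implementation: RMSSD is a standard time-domain HRV measure (low HRV is associated with stress), but the threshold and prompt are placeholders.

```python
# Sketch of a 'just-in-time' stress management loop, RMSSD-based.
# Threshold and message are hypothetical, not from the Intel project.

import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between heartbeats."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def check_stress(rr_intervals_ms, threshold_ms=20.0):
    """Prompt a breathing exercise when HRV drops below the threshold."""
    if rmssd(rr_intervals_ms) < threshold_ms:
        return "Prompt: try a paced-breathing exercise"
    return None  # HRV looks healthy, stay quiet

calm = [800, 820, 790, 830, 805, 815]      # varied beat intervals -> high HRV
stressed = [600, 602, 599, 601, 600, 603]  # metronomic beats -> low HRV
print(check_stress(calm))
print(check_stress(stressed))
```

The interface design problem mentioned above lives in that return string: how the prompt is worded and delivered determines whether it calms the user or compounds the stress.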