From the point of view of an outsider, the utility and value of computer technology that provides emotional feedback to the human operator is questionable. The basic argument normally goes like this: even if the technology works, do I really need a machine to tell me that I’m happy or angry or calm or anxious or excited? First of all, the feedback provided by this machine would be redundant; I already have a mind/body that keeps me fully apprised of my emotional status – thank you. Secondly, if I’m angry or frustrated, do you really think I would be helped in any way by a machine that drew my attention to these negative emotions? Actually, that would be particularly annoying. Finally, sometimes I’m not quite sure how I’m feeling or how I feel about something; feedback from a machine that says you’re happy or angry would just muddy the waters and add further confusion.
I recently read a paper by Rosalind Picard entitled “Emotion research for the people, by the people.” In this article, Prof. Picard has some fun contrasting engineering and psychological perspectives on the measurement of emotion. Perhaps I’m being defensive, but she seemed to have more fun poking fun at the psychologists than the engineers. The central impasse that she identified goes something like this: engineers develop sensor apparatus that can deliver a whole range of objective data, whilst psychologists have decades of experience with theoretical concepts related to emotion, so why haven’t people really benefited from their union through the field of affective computing? Prof. Picard correctly identifies a reluctance on the part of the psychologists to define concepts with sufficient precision to aid the work of the engineers. What I felt was glossed over in the paper was the other side of the problem, namely the willingness of engineers to attach emotional labels to almost any piece of psychophysiological data, usually in the context of badly-designed experiments (apologies to any engineers reading this, but I wanted to add a little balance to the debate).
This article in New Scientist prompts a short follow-up to my posts on body-blogging. The article describes a camera worn around the neck that takes a photograph every 30 seconds. The potential for this device to help people suffering from dementia and related problems is huge. At perhaps a more trivial level, the camera would be a useful addition to wearable physiological sensors (see previous posts on quantifying the self). If physiological data could be captured and averaged over 30-second intervals, these data could be paired with a still image and presented as a visual timeline. This would save the body blogger from having to manually tag everything; the image also provides a nice visual recall prompt for memory, and the person can speculate on how their location/activity/interactions caused changes in the body. Of course, it would also work as a great tool for research – particularly for stress research in the field.
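As a rough sketch of how such a pairing might work, here is a minimal example that bins hypothetical heart-rate samples into 30-second windows, each keyed to a photo timestamp (the data, sample rate and marker names are all made up for illustration):

```python
from datetime import datetime, timedelta
from statistics import mean

# Hypothetical input: (timestamp, heart_rate) samples every 5 seconds,
# and the timestamps at which the wearable camera took a photograph.
samples = [(datetime(2009, 7, 1, 9, 0, s), 70 + s % 7) for s in range(0, 60, 5)]
photos = [datetime(2009, 7, 1, 9, 0, 0), datetime(2009, 7, 1, 9, 0, 30)]

def timeline(samples, photos, window=timedelta(seconds=30)):
    """Average the physiological samples falling in each photo's
    30-second window, yielding (photo_time, mean_value) entries."""
    entries = []
    for shot in photos:
        in_window = [v for t, v in samples if shot <= t < shot + window]
        if in_window:
            entries.append((shot, mean(in_window)))
    return entries

for shot, avg_hr in timeline(samples, photos):
    print(shot.time(), round(avg_hr, 1))
```

Each entry could then be rendered as one tile of the visual timeline: the still image as the recall prompt, the averaged value as the physiological annotation.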
I’ve just returned from a summer school on pervasive adaptation organised under the PERADA project. As preparation for my talk, I was asked to identify some future applications for physiological computing. I drew from an idea first articulated by Ros Picard that exposure to quantifiable, objective feedback about emotional states could serve an educational purpose – to aid awareness and self-regulation. Thinking about a future time when wearable sensors are standard and wirelessly connected to phones/PDAs/laptops, I came up with the idea of body blogging. The basic notion here is that you can review a physiological data set collected over a period of time, perhaps synchronised with a diary, and identify trends that might be of interest.
The big changes, such as sleep/wake cycles, are sort of interesting (did you really have a bad night’s sleep?). If you take regular exercise, you might like to know how your body responded to that session at the gym or how many calories you burned during a run. Changes in physiology that relate to health, such as blood pressure, would be very interesting because hypertension tends to be essentially symptom-free, so the technology is providing a window on a hidden aspect of life. Perhaps I’m a little too curious about this stuff, but I’d like to know what kind of activities or contact with people tended to increase physiological markers of stress.
The central concept is to use a monitoring technology as a tool to extend self-awareness and to make changes (in lifestyle or attitude) that counteract those negative influences that are part-and-parcel of everyday life. When I proudly presented the idea, it struck me as a little “niche” and perhaps a little strange – an impression confirmed by the general apathy of the audience. On the next day, I checked my RSS feed from Wired and came across this article by Gary Wolf, who has obviously thought much more about this kind of stuff than me. He even runs a blog in conjunction with Kevin Kelly dedicated to the topic. Encouraged by this apparent serendipity, I brought up the prospect of body blogging again during my second talk of the summer school – but my audience remained distinctly underwhelmed, even though I sensed a small number thought the term ‘body blogging’ was neat.
As part of the health psychology module I teach, I’ve come across research on allostatic load (AL). This is a concept from stress research developed by Bruce McEwen among others; in essence, AL represents the temporal characteristics of how the body responds to a stressor (i.e. the magnitude of the response and the recovery time). As you may imagine, high stress reactivity with a slow recovery rate is bad for health. In fact, McEwen and Seeman linked AL to the concept of biological aging – people with higher AL have bodies that age at a faster rate than their chronological age (and tend to suffer from poor health as a direct consequence). Here’s an article explaining the application of this approach to the effects of socioeconomic status on health. There are several markers of AL, including: blood pressure, waist:hip ratio, the hormone cortisol, and the ratio of high- to low-density lipoproteins (see previous link for more examples).
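To make the scoring idea concrete, here is a minimal sketch of the count-based AL score used in this line of research: one point per biomarker falling in its high-risk range. The marker names and cut-off values below are illustrative placeholders, not clinical thresholds:

```python
# Hypothetical high-risk cut-offs for a handful of AL markers.
# In the research literature these are derived from the sample's
# own distribution (e.g. the highest-risk quartile); the numbers
# here are invented for illustration only.
HIGH_RISK_CUTOFFS = {
    "systolic_bp": 140,       # mmHg; higher is riskier
    "waist_hip_ratio": 0.95,  # higher is riskier
    "cortisol": 25.0,         # higher is riskier
    "total_hdl_ratio": 5.0,   # total:HDL cholesterol; higher is riskier
    "hdl": 37.0,              # HDL is protective, so LOWER is riskier
}

def allostatic_load(markers):
    """Count how many of a person's markers fall in the high-risk range."""
    score = 0
    for name, value in markers.items():
        cutoff = HIGH_RISK_CUTOFFS[name]
        if name == "hdl":              # protective marker: low values score
            score += value <= cutoff
        else:                          # risk markers: high values score
            score += value >= cutoff
    return score

print(allostatic_load({"systolic_bp": 150, "waist_hip_ratio": 0.9,
                       "cortisol": 30.0, "total_hdl_ratio": 5.5, "hdl": 50}))
# systolic_bp, cortisol and total_hdl_ratio score, so this prints 3
```

A higher count indicates greater cumulative wear-and-tear; in principle, a body blogger could track such a score over months to see whether lifestyle changes move it in the right direction.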
All of which is an extremely long-winded way of wondering whether body blogging could help people to track their AL and biological age – and allow them to develop strategies and habits that minimise the impact of everyday stress on health. The current conception of AL relies heavily on measures taken from plasma samples, so perhaps that is a limiting factor. On the other hand, one problem with trying to sustain healthy lifestyle choices is the absence of clear, unequivocal feedback – so perhaps there is some hope for the concept of body blogging after all.
At this year’s E3, Microsoft, Nintendo and Sony all presented their own visions of how the player will interact with games in the future. Microsoft introduced Project Natal, a full-body, hands-free game controller, which had been hinted at early last month. You can check the concept video here. Sony demonstrated a wand-like motion controller which works in conjunction with the PlayStation Eye. And Nintendo revealed the Wii Vitality Sensor, a biosensor add-on for the Wii controller.
Sadly, Nintendo didn’t reveal any specific details (or games, for that matter) on how they intend to use the sensor. However, from what little they did provide, it’s likely Nintendo are going to start with stress management games similar in nature to Healing Rhythm’s Journey to Wild Divine series. Given that the relax-to-win game format is very common in biofeedback-based stress management, I’m surprised a game demo was not forthcoming. Oh well, E3 isn’t over yet, so they might reveal some more information.
Next, we’ll have a look at the type of experiences the Wii Vitality Sensor can be expected to provide.