Category Archives: Musings

Thoughts and opinions about physiological computing applications, both real and imaginary

In the shadow of the polygraph

I was reading this short article in The Guardian today about the failure of polygraph technologies (including fMRI versions and voice analysis) to deliver data that was sufficiently robust to be admissible in court as evidence.  Several points made in the article prompted the thought that the development of physiological computing technologies lives, to some extent, in the shadow of the polygraph.

Think about it.  Both the polygraph and physiological computing aim to transform personal and private experience into quantifiable data that may be observed and assessed.  Both capture unconscious physiological changes that may signify hidden psychological motives and agendas, subconscious or otherwise – and of course, both involve the attachment of sensor apparatus.  This convergence means that both technologies are notoriously difficult to validate (hence the problems of polygraph evidence in court) – and that seems true whether we’re talking about the use of the P300 for “brain fingerprinting” or the use of ECG and respiration to capture a specific category of emotion.

Whenever I do a presentation about physiological computing, I can almost sense antipathy to the concept from some members of the audience, because the first thing people think of is the polygraph and the thoughts that logically follow are concerns about privacy, misuse and spying.  To counter these fears, I point out that physiological computing, whether it’s a game, a means of adapting a software agent or a brain-computer interface, has been developed for very different purposes: this technology is intended for personal use and is about control for the individual in the broadest sense, e.g. to control a cursor, to promote reflection and self-regulation, to make software reactive, personalised and smarter, and to ensure that the data protection rights of the individual are preserved – especially if they wish to share their data with others.

But everyone knows that any signal that can be measured can be hacked, so even capturing these kinds of physiological data per se opens the door for spying and other profound invasions of privacy.

Which takes us inevitably back into the shadow of the polygraph.

I’m sure attitudes will change if the right piece of technology comes along that demonstrates the upside of physiological computing.  But if early systems don’t take data privacy seriously, as in very seriously, the public could go cold on this concept before the systems have had a chance to prove themselves in the marketplace.

For musings on a similar theme, see my previous post Designing for the Gullible.

Heart Chamber Orchestra

I came across this article about the Heart Chamber Orchestra on the Wired site last week.  The Orchestra are a group of musicians who wear ECG monitors whilst they play – the signals from the ECG feed directly into laptops and adapt the musical score as it is played, in real time.  They also have some nice graphics generated by the ECG running in the background when they play (see clip below).  What I think is really interesting about this project is the reflexive loop set up between the ECG, the musician’s response and the adaptation of the musical score.  It really goes beyond standard biofeedback – a live feed from the ECG mutates the musical score, the player responds to the technical/emotional qualities of that score, which has a second-order effect on the ECG, and so on.  In the Wired article, they refer to the possibility of the audience being equipped with ECG monitors to provide another input to the loop – which is truly a mind-boggling possibility in terms of a fully-functioning biocybernetic loop.

The thing I find slightly frustrating about the article and the information on the project website is the lack of detail about how the ECG influences the musical score.  In a straightforward way, an ECG will yield a beat-to-beat interval, which could of course generate a metronomic beat if averaged over the group.  Alternatively, each individual ECG could generate its own beat, and these could be superimposed over one another.  But there are dozens of ways in which ECG information could be used to adapt a musical score in real time.  According to the project information, there is also a composer involved doing some live manipulations of the score, but it’s hard to figure out how much of the real-time transformation is coming from him or her and how much is coming directly from the ECG signal.
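
To make the first of those possibilities concrete, here is a minimal sketch of one way (assumed by me, not documented by the Orchestra) that R-peak timestamps from each player’s ECG could be turned into beat-to-beat intervals and then pooled into a shared tempo:

```python
# A minimal sketch (not the Orchestra's actual pipeline) of one way an ECG
# stream could drive a tempo: convert R-peak timestamps into beat-to-beat
# intervals, then average across players to derive a shared metronome.

def beat_intervals(r_peak_times):
    """Return successive R-R intervals (seconds) from R-peak timestamps."""
    return [t2 - t1 for t1, t2 in zip(r_peak_times, r_peak_times[1:])]

def group_tempo(all_r_peaks):
    """Average each player's recent heart rate and return a tempo in BPM."""
    rates = []
    for r_peaks in all_r_peaks:
        intervals = beat_intervals(r_peaks)
        if intervals:
            mean_interval = sum(intervals) / len(intervals)
            rates.append(60.0 / mean_interval)   # seconds per beat -> BPM
    return sum(rates) / len(rates) if rates else None

# Example: two hypothetical players, R-peaks in seconds since the piece began.
player_a = [0.0, 0.85, 1.72, 2.60]   # roughly 70 bpm
player_b = [0.0, 0.60, 1.21, 1.80]   # roughly 100 bpm
print(group_tempo([player_a, player_b]))  # a single shared tempo, ~85 bpm
```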

I should also say that the Orchestra are currently competing for the FILE PRIX LUX prize and you can vote for them here.

Before you do, you might want to see the orchestra in action in the clip below.

Heart Chamber Orchestra on Vimeo

Wireless Heart Monitoring Trials

I’m currently working on a project over at LJMU (among other things) involving wireless heart monitoring. The project goes live later this month so I’ll talk more about it then, but for the time being here are some snapshots of my physiology in situations I don’t normally get to record with the “wired to my desktop” setup.

In Figures 1 and 2, each point represents heart rate averaged over one minute. As a side note, my heart rate at rest is typically in the 60-70 bpm range.

Figure 1– Sleep Cycle: Heartbeat rate from 12am to 9am on 07-02-10

Figure 2 – Sleep Cycle: Heartbeat rate from 2am to 9am on 08-02-10

Figure 3 – Travel from the Office to Home

In Figure 3, each point represents heart rate averaged over 10 seconds. As you can see, when I leave the office my heart rate starts at ~70 bpm, but it then skyrockets to ~120 bpm as I walk home – “walking” being the subjective term here; I guess the monitor would say I was jogging rather than walking. As I reach the train station at 17:35 my heart rate returns to its resting state until I get off the train at 17:45.
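
For anyone curious how such plots are produced, here is a minimal sketch (my own assumption about the processing, not the actual project code) of collapsing a stream of heart rate samples into one mean value per fixed window:

```python
# Collapse a stream of (timestamp, bpm) samples into one mean value per
# fixed window - 60 s windows for Figures 1-2, 10 s windows for Figure 3.

def windowed_mean_hr(samples, window_s):
    """samples: list of (time_in_seconds, bpm).
    Returns a list of (window_start, mean_bpm) tuples."""
    if not samples:
        return []
    bins = {}
    for t, bpm in samples:
        start = int(t // window_s) * window_s
        bins.setdefault(start, []).append(bpm)
    return [(start, sum(v) / len(v)) for start, v in sorted(bins.items())]

# Hypothetical samples: resting at the desk, then setting off for home.
samples = [(0, 68), (4, 71), (9, 70), (12, 95), (18, 118)]
print(windowed_mean_hr(samples, window_s=10))
# [(0, 69.66...), (10, 106.5)]
```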

EDIT:

The wireless heart monitoring project can be found at The Body Blogger. The project involves the 24×7 recording of my physiological changes, which are shared in real time on this website and Twitter. I recently did a talk at Quantified Self London about my experiences as The Body Blogger, for which we now have a video.

Cross-posted at http://justkiel.blogspot.com

The Extended Nervous System

I’d like to begin the new year on a philosophical note. A lot of research in physiological computing is concerned with the practicalities of developing this technology. But what about the conceptual implications of using these systems (assuming that they are constructed and reach the marketplace)? At a fundamental level, physiological computing represents an extension of the human nervous system. This is nothing new. Our history is littered with tools and artifacts, from the plough to the internet, designed to extend the ‘reach’ of human senses and capabilities. As our technology becomes more compact, we become increasingly reliant on tools to augment our cognitive capacity. This can be as trivial as using the address book on a mobile phone as a shortcut to “remembering” a friend’s number or having an electronic reminder of an imminent appointment. This kind of “scaffolded thinking” (Clark, 2004) represents a merger between a human limitation (long-term memory) and a technological solution: we’ve effectively subcontracted part of our internal cognitive store to an external silicon one. Andy Clark argues persuasively in his book that these human-machine mergers are a perfectly natural consequence of human-technology co-evolution.

If we use technology to extend the human nervous system, does this also represent a natural consequence of the evolutionary trajectory that we share with machines? It is one thing to delegate information storage to a machine but granting access to the central nervous system, including the inner sanctum of the brain, represents a much more intimate category of human-machine merger.

In the case of muscle interfaces, where EMG activity or eye movements function as proxies for mouse or touchpad input, I feel the nervous system has been extended in a modest way – gestures are simply recorded at a different site: rather than looking and pointing, you can now just look. BCIs represent a more interesting case. Many are designed to completely circumvent the conventional motor component of input control. This makes BCIs brilliant candidates for assistive technology, and effective use of a BCI device feels slightly magical – because it is the ultimate in remote control. But like muscle interfaces, all we have done is create an alternative route for human-computer input. The exciting subtext to BCI use is how the user learns to self-regulate brain activity in order to successfully operate this category of technology. The volitional control of brain activity seems like an extension of the human nervous system in my view (or, to be more specific, an extension of how we control the human nervous system), albeit one that occurs as a side effect or consequence of technology use.

Technologies based on biofeedback mechanics, such as biocybernetic adaptation and ambulatory monitoring, literally extend the human nervous system by transforming a feeling/thought/experience that is private, vague and subjective into an observable representation that is public, quantified and objective. In addition, biocybernetic systems that monitor changes in physiology to trigger adaptive system responses take the concept further – these systems don’t merely represent the activity of the nervous system, they are capable of acting on the basis of this activity, completely bypassing human awareness if necessary. That prospect may alarm many, but one shouldn’t be too disturbed – the autonomic nervous system routinely does hundreds of things every minute just to keep us conscious and alert, without ever asking or intruding on consciousness. Of course the process of autonomic control can go awry – take panic attacks as one example – and it is telling that biofeedback represents one way to correct this instance of autonomic malfunction. The therapy works by making a hidden activity quantifiable and open to inspection, and in doing so, provides the means for the individual to “retrain” their own autonomic system via conscious control. The same dynamic runs through those systems concerned with biocybernetic control and ambulatory monitoring. Changes at the user interface provide feedback on emotion or cognition and invite the user to extend self-awareness, and in doing so, to enhance control over their own central nervous system. As N. Katherine Hayles puts it in her book on posthumanism: “When the body is integrated into a cybernetic circuit, modification of the circuit will necessarily modify consciousness as well. Connected to multiple feedback loops to the objects it designs, the mind is also an object of design.”

So, really what we’re talking about is extending our human nervous systems via technology and in doing so, enhancing our ability to self-regulate our human nervous systems. To slightly adapt a phrase from the autopoietic analysis of the nervous system, we are observing systems observing ourselves observing (ourselves).

It has been argued by Rosalind Picard among others that increased self-awareness and self-control of bodily states is a positive aspect of this kind of technology. In some cases, such as anger management and stress reduction, I can see clear arguments to support this position. On the other hand, I can also see potential for confusion and distress due to disembodiment (I don’t feel angry but the computer says I do – so which is me?) and invasion of privacy (I know you say you’re not angry but the computer says you are).

If we are to extend the nervous system, I believe we must also extend our conception of the self – beyond the boundaries of the skull and the skin – in order to incorporate feedback from a computer system into our strategies for self-regulation. But we should not be sucked into simplistic conflicts by these devices. As N. Katherine Hayles points out, border crossings between humans and machines are achieved by analogy, not simple re-representation – the quantified self out there and the subjective self in here occupy different but overlapping spheres of experience. We must bear this in mind if we, as users of this technology, are to reconcile the plenitude of embodiment with the relative sparseness of biofeedback.

Categories of Physiological Computing

In my last post I articulated a concern about how the name adopted by this field may drive the research in one direction or another.  I’ve adopted the Physiological Computing (PC) label because it covers the widest range of possible systems.  Whilst the PC label is broad, generic and probably vague, it does cover a lot of different possibilities without getting into the tortured semantics of categories, sub-categories and sub-sub-categories.

I’ve defined PC as a computer system that uses real-time bio-electrical activity as input data.  At one level, moving a mouse (or a Wii) with your hand represents a form of physiological computing, as do physical interfaces based on gestures – both are ultimately based on muscle potentials.  But that seems a little pedantic.  In my view, the PC concept begins with Muscle Interfaces (e.g. eye movements) where the electrical activity of muscles is translated into gestures or movements in 2D space.  Brain-Computer Interfaces (BCI) represent a second category where the electrical activity of the cortex is converted into input control.  Biofeedback represents the ‘parent’ of this category of technology and was developed as a control device, to train the user how to manipulate the autonomic nervous system.  By contrast, systems involving biocybernetic adaptation passively monitor spontaneous activity from the central nervous system and translate these signals into real-time software adaptation – most forms of affective computing fall into this category.  Finally, we have the ‘black box’ category of ambulatory recording, where physiological data are continuously recorded and reviewed at some later point in time by the user or medical personnel.

I’ve tried to capture these different categories in the diagram below.  The differences between each grouping lie on a continuum from overt observable physical activity to covert changes in psychophysiology.  Some are intended to function as explicit forms of intentional communication with continuous feedback, others are implicit with little intentionality on the part of the user.  Also, there is huge overlap between the five different categories of PC: most involve a component of biofeedback and all will eventually rely on ambulatory monitoring in order to function.  What I’ve tried to do is sketch out the territory in the most inclusive way possible.  This inclusive scheme also makes hybrid systems easier to imagine, e.g. BCI + biocybernetic adaptation, muscle interface + BCI – basically we have systems (2) and (3) designed as input control, either of which may be combined with (5) because it operates in a different way and at a different level of the HCI.

As usual, all comments welcome.

Five Categories of Physiological Computing

What’s in a name?

I attended a workshop earlier this year entitled aBCI (affective Brain Computer Interfaces) as part of the ACII conference in Amsterdam.  In the evening we discussed what we should call this area of research on systems that use real-time psychophysiology as an input to a computing system.  I’ve always called it ‘Physiological Computing’ but some thought this label was too vague and generic (which is a fair criticism).  Others were in favour of something that involved BCI in the title – such as Thorsten Zander‘s definitions of passive vs. active BCI.

As the debate went on, it seemed that what we were discussing was an exercise in ‘branding’ as opposed to literal definition.  There’s nothing wrong with that; it’s important that nascent areas of investigation represent themselves in a way that is attractive to potential sponsors.  However, I have three main objections to the BCI label as an umbrella term for this research: (1) BCI research is identified with EEG measures, (2) BCI remains a highly specialised domain with the vast majority of research conducted on clinical groups, and (3) BCI is associated with the use of psychophysiology as a substitute for input control devices.  In other words, BCI isn’t sufficiently generic to cope with autonomic measures, real-time adaptation, muscle interfaces, health monitoring etc.

My favoured term is vague and generic, but it is very inclusive.  In my opinion, the primary obstacle facing the development of these systems is the fractured nature of the research area.  Research on these systems is multidisciplinary, involving computer science, psychology and engineering.  A number of different system concepts are out there, such as BCI vs. concepts from affective computing.  Some are intended to function as alternative forms of input control, others are designed to detect discrete psychological states.  Some use autonomic variables as opposed to EEG measures, some try to combine psychophysiology with overt changes in behaviour.  This diversity makes the area fun to work in but also makes it difficult to pin down.  At this early stage, there’s an awful lot going on and I think we need a generic label both to fully exploit synergies and, most importantly, to make sure nothing gets ruled out.

life logging + body blogging

This article in New Scientist prompts a short follow-up to my posts on body-blogging. The article describes a camera worn around the neck that takes a photograph every 30 seconds. The potential for this device to help people suffering from dementia and related problems is huge. At perhaps a more trivial level, the camera would be a useful addition to wearable physiological sensors (see previous posts on quantifying the self). If physiological data could be captured and averaged over 30-second intervals, these data could be paired with a still image and presented as a visual timeline. This would save the body blogger from having to manually tag everything; the image also provides a nice visual recall prompt for memory, and the person can speculate on how their location/activity/interactions caused changes in the body. It would also work as a great tool for research – particularly for stress research in the field.
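
As a rough illustration of the pairing idea (the data structures and filenames here are invented, not taken from any particular device), each photograph could simply be matched with the mean of the physiological samples recorded in the 30 seconds before it was taken:

```python
# A hedged sketch of pairing 30-second physiological averages with
# timestamped photographs to build a visual timeline.

from datetime import datetime, timedelta

def build_timeline(hr_samples, photos, window_s=30):
    """hr_samples: list of (datetime, bpm); photos: list of (datetime, filename).
    Returns one (photo_time, filename, mean_bpm) entry per photo, averaging
    the samples taken in the window ending at each photo."""
    timeline = []
    for photo_time, filename in photos:
        window_start = photo_time - timedelta(seconds=window_s)
        in_window = [bpm for t, bpm in hr_samples if window_start <= t <= photo_time]
        mean_bpm = sum(in_window) / len(in_window) if in_window else None
        timeline.append((photo_time, filename, mean_bpm))
    return timeline

# Hypothetical usage: a photo every 30 seconds, heart rate sampled every 5 s.
t0 = datetime(2010, 2, 7, 9, 0, 0)
hr = [(t0 + timedelta(seconds=5 * i), 65 + i) for i in range(12)]
pics = [(t0 + timedelta(seconds=30), "img_0001.jpg"),
        (t0 + timedelta(seconds=60), "img_0002.jpg")]
for entry in build_timeline(hr, pics):
    print(entry)
```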

quantifying the self (again)

I just watched this cool presentation about blogging self-report data on mood/lifestyle and looking at the relationship with health. My interest in this topic is tied up with the concept of body-blogging (i.e. recording physiological data using ambulatory systems) – see earlier post. What’s nice about the idea of body-blogging is that it’s implicit and doesn’t require you to do anything extra, such as completing mood ratings or other self-reports. The fairly major downside to this approach comes in two varieties: (1) the technology to do it easily is still fairly expensive and the associated software is cumbersome to use (not that it’s bad software, it’s just designed for medical or research purposes), and (2) continuous physiology generates a huge amount of data.

For the individual, this concept of self-tracking and self-quantifying is linked to increased self-awareness (learning how your body is influenced by everyday events), and with self-awareness come new strategies for self-regulation to minimise negative or harmful changes. My feeling is that there are certain times in our lives (e.g. following a serious illness or medical procedure) when we have a strong motivation to quantify and monitor our physiological patterns. However, I see a risk of that strategy tipping a person over into hypochondria if they feel particularly vulnerable.

At the level of the group, it’s fascinating to see the seeds of a crowdsourcing idea in the above presentation. The idea is that people self-log over a period and share this information anonymously on the web. This activity creates a database that everyone can access and analyse, participants and researchers alike. I wonder if people would be as comfortable sharing heart rate or blood pressure data – provided it was submitted anonymously, I don’t see why not.

There’s enormous potential here for wearable physiological sensors to be combined with self-reported logging, with both data sets combined online. Obviously there is a fidelity mismatch here: physiological data can be recorded in milliseconds whilst self-report data is recorded in hours. But some clever software could be constructed to aggregate the physiology and put both data sets on the same time frame. The benefit of doing this, for both researcher and participant, is to explore the connections between (previously) unseen patterns of physiological response and the experience of the individual/group/population.
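
The aggregation itself is not complicated; here is a minimal sketch (my own assumptions about the data formats, not any existing tool) that bins high-frequency physiological samples by the hour so that each hourly mood rating has a matching aggregate value:

```python
# Bin high-frequency physiological samples by the hour, then join them
# with hourly self-report (mood) ratings so both sit on one time frame.

from collections import defaultdict
from datetime import datetime

def hourly_aggregate(phys_samples):
    """phys_samples: list of (datetime, value). Returns {hour_start: mean_value}."""
    bins = defaultdict(list)
    for t, value in phys_samples:
        hour = t.replace(minute=0, second=0, microsecond=0)
        bins[hour].append(value)
    return {hour: sum(v) / len(v) for hour, v in bins.items()}

def join_with_self_report(phys_samples, mood_ratings):
    """mood_ratings: {hour_start: rating}. Returns rows of (hour, mean_phys, rating)."""
    phys_by_hour = hourly_aggregate(phys_samples)
    return [(hour, phys_by_hour.get(hour), rating)
            for hour, rating in sorted(mood_ratings.items())]

# Hypothetical usage: one hour of heart rate samples against a mood score of 6.
hour = datetime(2010, 3, 1, 14)
phys = [(datetime(2010, 3, 1, 14, m), 70 + m // 10) for m in range(0, 60, 10)]
print(join_with_self_report(phys, {hour: 6}))
```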

For anyone who’s interested, here’s a link to another blog site containing a report from an event that focused on self-tracking technologies.

Audience Participation

A paper just published in IJHCS by Stevens et al. (link to abstract) describes how members of the audience used a PDA to register their emotional responses in real time during a number of dance performances.    It’s an interesting approach to studying how emotional responses may converge and diverge during particular sections of a performance.  The PDA displays a two-dimensional space with valence and activation representing emotion (i.e. Russell’s circumplex model).  The participants were required to indicate their position within this space with a stylus at a rate of two readings per second!

That sounds like a lot of work, so how about a physiological computing version where valence and activation are operationalised with real-time psychophysiology, e.g. a corrugator/zygomaticus reading for valence and blood pressure/GSR/heart rate for activation.  Provided that the person remained fairly stationary, it could deliver the same kind of data with a higher level of fidelity and without the onerous requirement to do self-reports.
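
As a purely speculative sketch of how those two dimensions might be derived from the signals mentioned above (the baselines and scaling factors are placeholders I have invented, not validated values), valence could come from the balance of zygomaticus against corrugator EMG, and activation from skin conductance and heart rate:

```python
# Speculative mapping of psychophysiology onto the circumplex: valence from
# the balance of 'smiling' (zygomaticus) vs. 'frowning' (corrugator) EMG,
# activation from skin conductance level and heart rate above resting baselines.

def valence(zygomaticus, corrugator):
    """Positive when zygomaticus activity dominates corrugator activity.
    Inputs are EMG amplitudes expressed as a proportion of each muscle's baseline."""
    return max(-1.0, min(1.0, zygomaticus - corrugator))

def activation(scl, hr, scl_baseline=5.0, hr_baseline=70.0):
    """Positive when skin conductance level (microsiemens) and heart rate (bpm)
    rise above resting baselines; crude linear scaling for illustration only."""
    a = 0.5 * (scl - scl_baseline) / scl_baseline + 0.5 * (hr - hr_baseline) / hr_baseline
    return max(-1.0, min(1.0, a))

# Two readings per second, as in the Stevens et al. protocol, but computed
# from sensors rather than self-reported with a stylus:
sample = {"zyg": 0.6, "corr": 0.2, "scl": 6.5, "hr": 88}
print(valence(sample["zyg"], sample["corr"]),      # 0.4   (mildly positive)
      activation(sample["scl"], sample["hr"]))     # ~0.28 (moderately activated)
```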

This system concept could really take off if you had hundreds of audience members wired up for a theatre performance and live feedback of the ‘hive’ emotion represented on stage.  This could be a backdrop projection, or the colour/intensity of stage lighting working as an en-masse biofeedback system.  A clever installation could allow the performers to interact with the emotional representation of the audience – to check out the audience response or coerce certain responses.
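
To give a flavour of the lighting idea (everything here is illustrative, not a real lighting protocol), the per-audience-member arousal estimates could simply be averaged and mapped onto a colour for the stage wash:

```python
# Toy 'hive' feedback: average an arousal estimate across the audience and
# map it onto a lighting colour, cool blue when calm, hot red when activated.

def hive_colour(arousal_scores):
    """arousal_scores: per-audience-member values in [-1, 1].
    Returns an (r, g, b) tuple for the stage wash."""
    mean = sum(arousal_scores) / len(arousal_scores)
    level = (mean + 1.0) / 2.0           # rescale to [0, 1]
    red = int(255 * level)
    blue = int(255 * (1.0 - level))
    return (red, 0, blue)

print(hive_colour([0.3, -0.1, 0.6, 0.2]))  # a warm purple for a mildly aroused crowd
```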

Or perhaps this has already been done somewhere and I missed it.

Emotional HCI

Just read a very interesting and provocative paper entitled “How emotion is made and measured” by Kirsten Boehner and colleagues.  The paper provides a counter-argument to the perspective that emotion should be measured/quantified/objectified in HCI and used as part of an input to an affective computing system or evaluation methodology.  Instead they propose that emotion is a dynamic interaction that is socially constructed and culturally mediated.  In other words, the experience of anger is not a score of 7 on a 10-point scale that is fixed in time, but an unfolding iterative process based upon beliefs, social norms, expectations etc.

This argument seems fine in theory (to me) but difficult in practice.  I get the distinct impression the authors are addressing the way emotion may be captured as part of a HCI evaluation methodology.  But they go on to question the empirical approach in affective computing.  In this part of the paper, they choose their examples carefully.  Specifically, they focus on the category of ‘mirroring’ (see earlier post) technology wherein representations of affective states are conveyed to other humans via technology.  The really interesting idea here is that emotional categories are not given by a machine intelligence (e.g. happy vs. sad vs. angry) but generated via an interactive process.  For example, friends and colleagues provide the semantic categories used to classify the emotional state of the person.  Or literal representations of facial expression (a web-cam shot for instance) are provided alongside a text or email to give the receiver an emotional context that can be freely interpreted.  This is a very interesting approach to how an affective computing system may provide feedback to the users.  Furthermore, I think once affective computing systems are widely available, the interpretive element of the software may be adapted or adjusted via an interactive process of personalisation.

So, the system provides an affective diagnosis as a first step, which is refined and developed by the person – or even by others as time goes by.  Much like the way Amazon makes a series of recommendations based on your buying patterns that you can edit and tweak (if you have the time).

My big problem with this paper was that a very interesting debate was framed in terms of an either/or position.  So, if you use psychophysiology to index emotion, you’re disregarding the experience of the individual by using objective conceptualisations of that state.  If you use self-report scales to quantify emotion, you’re rationalising an unruly process by imposing a bespoke scheme of categorisation, etc.   The perspective of the paper reminded me of the tiresome debate in psychology between objective/quantitative data and subjective/qualitative data about which method delivers “the truth.”  I say ‘tiresome’ because I tend towards the perspectivist view that both approaches provide ‘windows’ on a phenomenon, both of which have advantages and disadvantages.

But it’s an interesting and provocative paper that gave me plenty to chew over.