Physiological Computing meets Augmented Reality in a Museum

First of all, an apology – Kiel and I try to keep this blog ticking over, but for most of 2011, we’ve been preoccupied with a couple of large projects and getting things organised for the CHI workshop in May.  One of the “things” that led to this hiatus on the blog is a new research project funded by the EU called ARtSENSE, which is the topic of this post.

When I think of biocybernetic control and real-time adaptation, I’m generally thinking of how adaptive software works in the context of active tasks with clearly defined goals, such as driving a car safely from A to B or playing a computer game.  These kinds of active tasks cover most of the things we do in everyday life, but not everything.  If we wind back to early 2010, I was approached to join a consortium in the process of putting together an ambitious research proposal that sought to marry physiological computing and augmented reality.  The basic pitch was this – augmented reality (AR) is capable in principle of providing a wide range of information to the user, from voiceover and images to movies; also, AR can deliver this information at variable rates from slow to fast.  This facility of AR is generally seen as a useful thing, but it raises three important questions from the perspective of the user experience: (1) how does AR know when to ‘chip in’ and augment the experience, (2) how does AR know when it’s dumping too much data on the poor user or when the user is simply bored by the content, and (3) what kind of pacing of information is best suited to the information processing capacity of the individual user?  Added to these design issues surrounding AR, we also had a specific context for system usage in the proposal – visitors to museums and galleries, where AR provides a good way to facilitate the experience of the visitor by enhancing digital culture.

From the perspective of physiological computing, we’re looking at whether psychophysiology captured in real-time can provide guidance for the AR system working in real-time.  This notion captures the idea of dynamic design and personalisation, which is inherent in biocybernetic adaptation; in other words, the augmentation provided by AR is driven and paced by psychophysiological responses to both exhibits (in physical reality) and the virtual content provided by AR.  In principle, this is the ultimate personalisation, where system performance is tailored to a specific person in a particular state standing in a defined place at a certain time.  It sounds great in principle, but we have several layers of complexity to deal with in order to make it work.  First of all, we are monitoring people in a passive viewing scenario – therefore, physiological reactivity may be relatively low amplitude or sporadic and short-lived.  Secondly, we will have to distinguish between psychophysiological responses to the real and the virtual environment – no easy task when the two are as closely intertwined as they should be for AR to really enhance the visitor experience.  Finally, we’re going to have to make this work in the field, in a public space where people can walk around, talk, have a cup of coffee or a glass of beer – so we’ll have to deal with a lot of potential confounds.
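To make the general idea of biocybernetic adaptation concrete, the control logic might be sketched as a simple loop: infer a user state from psychophysiological indices, then adjust the pacing of AR content accordingly.  This is purely an illustrative sketch – the state labels, thresholds, and function names below are hypothetical and not the project’s actual design:

```python
# Hypothetical biocybernetic adaptation loop for AR content pacing.
# All names and thresholds are illustrative, not the ARtSENSE implementation.

def classify_state(engagement, workload):
    """Map two normalised psychophysiological indices (0-1) to a user state."""
    if workload > 0.8:
        return "overloaded"
    if engagement < 0.3:
        return "bored"
    return "engaged"

def adapt_pacing(state, current_rate):
    """Adjust the rate of AR content delivery (items per minute)."""
    if state == "overloaded":
        return max(1, current_rate - 1)   # slow down: too much data
    if state == "bored":
        return current_rate + 1           # speed up: offer richer content
    return current_rate                   # engaged: leave pacing alone

# One adaptation cycle per measurement window of (engagement, workload):
rate = 3
for engagement, workload in [(0.6, 0.9), (0.2, 0.4), (0.7, 0.5)]:
    rate = adapt_pacing(classify_state(engagement, workload), rate)
```

Even a toy loop like this makes the three design questions above explicit: the classifier decides *when* to intervene, and the pacing rule decides *how much* augmentation the user receives.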

The project began last month, so we’re now up and running.  It’s a big challenge, but on the positive side, we have some good collaborators distributed across Europe, including two museums (one in Madrid, another in Paris) and a gallery space here in Liverpool.  We can also incorporate eye movements into our psychophysiological analyses of user behaviour – thanks to the iStar device constructed by our partners at the Fraunhofer IOSB.

It’s sure to be an interesting journey that will test the capabilities of biocybernetic diagnosis and adaptation to an unprecedented level.  We’ll post regular updates on our work here.

If you’re interested in finding out more, here is the project website.
