Tag Archives: adaptive responses

BCI, biocybernetic control and gaming

Way back in 2008, I was due to go to Florence to present at a workshop on affective BCI as part of CHI. In the event, I was ill that morning and missed both the trip and the workshop. Since I'd already prepared the presentation, I recorded a podcast to share with the workshop attendees. I dug it out of the vaults for this post because gaming and physiological computing is such an interesting topic.

The work is dated now, but essentially I'm drawing a distinction between my understanding of BCI and biocybernetic adaptation: the former is an alternative means of input control within the HCI, whereas the latter can be used to adapt the nature of the HCI. I also argue that BCI is ideally suited to certain types of game mechanics precisely because it will not work 100% of the time. I used the TV series "Heroes" to illustrate these kinds of mechanics, which I regret in hindsight, because I totally lost all enthusiasm for that show after series 1.
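
To make that distinction concrete, here is a minimal sketch of the two control modes; the `read_alpha_power()` feed, thresholds and step sizes are illustrative placeholders rather than details from the original paper.

```python
import random

def read_alpha_power() -> float:
    """Placeholder for a live EEG alpha-power estimate in [0, 1]."""
    return random.random()  # stand-in for a real sensor pipeline

def bci_input_control(threshold: float = 0.7) -> str:
    """BCI mode: the physiological signal acts as an input device,
    issuing an explicit command when it crosses a threshold."""
    return "FIRE" if read_alpha_power() >= threshold else "IDLE"

def biocybernetic_adaptation(difficulty: float) -> float:
    """Biocybernetic mode: the signal never issues commands; it is
    used to quietly adapt the nature of the interaction itself."""
    alpha = read_alpha_power()
    if alpha > 0.7:                        # relaxed/disengaged: raise challenge
        return min(difficulty + 0.1, 1.0)
    if alpha < 0.3:                        # overloaded: ease off
        return max(difficulty - 0.1, 0.0)
    return difficulty                      # in the target zone: no change
```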

The original CHI paper for this presentation is available here.


Mood and Music: effects of music on driver anger

Last month I gave a presentation at the Annual Meeting of the Human Factors and Ergonomics Society held at Leeds University in the UK.  I stood on the podium and presented the work, but the people who really deserve most of the credit are Marjolein van der Zwaag (from Philips Research Laboratories) and my own PhD student at LJMU, Elena Spiridon.

You can watch a podcast of the talk above.  The work was originally conducted at the end of 2010 as part of the REFLECT project and was inspired by earlier research on affective computing in which the system makes an adaptation to alleviate a negative mood state.  The rationale is that any such adaptation will have beneficial effects: it reduces the duration and intensity of the negative mood and, in doing so, mitigates any undesirable effects on the person's behaviour or health.

Our study was concerned with the level of anger a person might experience on the road.  We know that anger places 'load' on the cardiovascular system and is associated with the undesirable behaviours of aggressive driving.  In our study, we subjected participants to a simulated driving task designed to make them angry – a protocol that we have developed at LJMU.  Marjolein was interested in the effects of different types of music on the cardiovascular system while the person is experiencing a negative mood state; for our study, she created four categories of music that crossed high/low activation with positive/negative valence.

The study does not represent an investigation into a physiological computing system per se, but is rather a validation study to explore whether an adaptation, such as selecting a certain type of music when a person is angry, can have beneficial effects.  We’re working on a journal paper version at the moment.
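
As a toy illustration of how such an adaptation might be wired up, here is a sketch built around the four activation/valence categories; the mapping of anger onto low-activation, positive-valence music is my own illustrative assumption, not a finding of the study.

```python
# Hypothetical music library keyed by (activation, valence) category,
# mirroring the four categories used in the study. Track names and the
# anger -> low-activation/positive-valence mapping are assumptions.
MUSIC_LIBRARY = {
    ("high", "positive"): ["upbeat_pop.mp3"],
    ("high", "negative"): ["aggressive_rock.mp3"],
    ("low", "positive"): ["calm_acoustic.mp3"],
    ("low", "negative"): ["melancholy_strings.mp3"],
}

def select_music(user_state: str) -> list[str]:
    """Pick the music category intended to counter the detected state."""
    if user_state == "angry":
        # Assumed strategy: counter high-activation negative affect
        # with low-activation, positive-valence music.
        return MUSIC_LIBRARY[("low", "positive")]
    return MUSIC_LIBRARY[("high", "positive")]  # default: keep mood upbeat
```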

REFLECT Project Promo Video

Some months ago, I wrote this post about the REFLECT project that we participated in for the last three years.  In short, the REFLECT project was concerned with the research and development of three different kinds of biocybernetic loops: (1) detection of emotion, (2) diagnosis of mental workload, and (3) assessment of physical comfort.  Psychophysiological measures were used to assess (1) and (2), whilst physical movement (fidgeting) in a seated position was used for (3).  All of this was integrated into the 'cockpit' of a Ferrari.

The idea behind the emotional loop was to have the music change in response to emotion (to alleviate negative mood states), the cognitive loop would block incoming calls if the driver was in a state of high mental workload, and air-filled bladders in the seat would adjust to promote physical comfort.  You can read all about the project here.  Above you'll find a promotional video that I've only just discovered – the reason for my delay in posting it is probably vanity: the filming was over before I got to the Ferrari site in Maranello.  The upside of my absence is that you can watch the much more articulate and handsome Dick de Waard explain the cognitive loop, which was our main involvement in the project.
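
For a sense of how the three loops sit side by side, here is a rough sketch; the sensor readings, thresholds and action names are hypothetical placeholders for what the real in-car system did.

```python
from dataclasses import dataclass

@dataclass
class DriverState:
    emotion: str      # e.g. "neutral" or "negative", from psychophysiology
    workload: float   # mental workload estimate in [0, 1]
    fidgeting: float  # seated-movement index for physical comfort

def emotional_loop(state: DriverState) -> str | None:
    """Loop 1: change the music to alleviate a negative mood state."""
    return "play_calming_music" if state.emotion == "negative" else None

def cognitive_loop(state: DriverState) -> str | None:
    """Loop 2: block incoming calls under high mental workload."""
    return "block_incoming_calls" if state.workload > 0.8 else None

def comfort_loop(state: DriverState) -> str | None:
    """Loop 3: adjust the air-filled seat bladders when fidgeting rises."""
    return "adjust_seat_bladders" if state.fidgeting > 0.6 else None

def reflect_cycle(state: DriverState) -> list[str]:
    """Run all three loops on the latest driver state."""
    loops = (emotional_loop, cognitive_loop, comfort_loop)
    return [action for loop in loops if (action := loop(state)) is not None]
```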

REFLECT: Biocybernetic control with multiple loops

It has been said that every cloud has a silver lining, and the only positive of chronic jet lag (Kiel and I arrived in Vancouver yesterday for the CHI workshop) is that it gives you a chance to catch up on overdue tasks.  This is a post I'd been meaning to write for several weeks about my involvement in the REFLECT project.

For the last three years, our group at LJMU have been working on a collaborative project called REFLECT, funded by the European Commission under the Future and Emerging Technologies initiative.  The project was centred on the concept of "reflective software" that responds implicitly, and in real time, to changes in user needs.  A variety of physiological sensors were applied to the user in order to inform this kind of reflective adaptation.  So far, this is regular fare for anyone who's read this blog before, being a standard set-up for a biocybernetic adaptation system.


The Ultimate Relax to Win Dynamic

I came across an article in a Sunday newspaper a couple of weeks ago about an artist called xxxy who has created an installation using a BCI of sorts.  I'm piecing this together from what I read in the paper and what I could see on his site, but the general idea is this: the person wears a portable EEG rig (I don't recognise the model) and is placed in a harness with wires reaching up and up and up into the ceiling.  The person closes their eyes and relaxes – presumably as they enter a state of alpha augmentation, they begin to levitate courtesy of the wires.  The more deeply they relax, or the longer they sustain that state, the higher they go.  It's hard to tell from the video, but the person seems to be suspended around 25-30 feet in the air.
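
Here is a guess at how the control loop behind the installation might work, with sustained alpha activity winching the participant upwards; every threshold and rate below is my own invention based on what the article and video show.

```python
def update_height(height_m: float, alpha_power: float, dt_s: float = 0.1) -> float:
    """Raise the harness while the wearer stays relaxed; ease them
    back down when the relaxed state is lost."""
    RELAXED_THRESHOLD = 0.6   # alpha-power level counted as "relaxed"
    ASCEND_RATE = 0.15        # metres per second while relaxed
    DESCEND_RATE = 0.30       # metres per second otherwise
    MAX_HEIGHT = 9.0          # roughly the 25-30 feet seen in the video

    if alpha_power >= RELAXED_THRESHOLD:
        height_m += ASCEND_RATE * dt_s
    else:
        height_m -= DESCEND_RATE * dt_s
    return max(0.0, min(height_m, MAX_HEIGHT))
```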


Overt vs. Covert Expression

This article in New Scientist on Project Natal got me thinking about the pros and cons of monitoring overt expression via sophisticated cameras versus covert expression of psychological states via psychophysiology.  The great things about the depth-sensing cameras (summarised nicely by one commentator in the article as like having a Wii controller attached to each hand, each foot and your head) are that: (1) the technology is wireless, (2) interactions are naturalistic, and (3) it's potentially robust (provided nobody else walks into the camera view).  Also, because it captures overt expression of body position/posture or changes in facial expression/voice tone (the latter being mooted as a phase-two development), it measures those signs and signals that people are usually happy to share with their fellow humans – so the feel of the interaction should be as naturalistic as regular discourse.

So why bother monitoring psychophysiology in real time to represent the user?  Let's face it – there are big question marks over its reliability, it's largely unproven in the field, and it normally involves attaching wires to the person, even if the sensors themselves are wearable.

But to view this as a face-off between two sensor technologies is to miss the point.  Depth cameras are designed to give computer technology a set of eyes and ears with which to perceive and respond to overt visual or vocal cues from the user, whilst psychophysiological methods have been developed to capture covert changes that remain invisible to the eye.  For example, a camera system may detect a frown in response to an annoying email, whereas a facial EMG recording will often detect increased activity from the corrugator or frontalis (i.e. the frown muscles) regardless of whether anything changes on the person's face.

One approach is geared towards the detection of visible cues, whereas the physiological computing approach is concerned with invisible changes in brain activity, muscle tension and autonomic activity.  That last sentence makes the physiological approach sound superior, doesn't it?  But the truth is that the two approaches do different things, and the question of which is best depends largely on what kind of system you're trying to build.  For example, if I'm building an application to detect high levels of frustration in response to shoot-em-up gameplay, overt behavioural cues (facial expression, vocal changes, postural changes) will probably suffice to detect that extreme state.  On the other hand, if my system needed to resolve low vs. medium vs. high vs. critical levels of frustration, I'd have more confidence in psychophysiological measures to provide the necessary level of fidelity.
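
To put the fidelity argument in concrete terms, here is a sketch contrasting the two routes; the features, weights and cut-points are invented for illustration and assume inputs normalised to [0, 1].

```python
def overt_frustration(frowning: bool, shouting: bool) -> str:
    """Camera/microphone route: only the extreme state is visible."""
    return "high" if (frowning or shouting) else "none"

def covert_frustration(corrugator_emg: float,
                       heart_rate_delta: float,
                       skin_conductance: float) -> str:
    """Psychophysiological route: fuse measures into a continuous
    index, then resolve it into graded levels."""
    index = (0.5 * corrugator_emg
             + 0.3 * heart_rate_delta
             + 0.2 * skin_conductance)   # weights are illustrative
    if index > 0.85:
        return "critical"
    if index > 0.60:
        return "high"
    if index > 0.35:
        return "medium"
    return "low"
```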

Of course, the two approaches aren't mutually exclusive, and it's easy to imagine naturalistic input control going hand-in-hand with real-time system adaptation based on psychophysiological measures.

But that’s the next step – Project Natal and similar systems will allow us to interact using naturalistic gestures, and to an extent, to construct a representation of user state based on overt behavioural cues.  In hindsight, it’s logical (sort of) that we begin on this road by extending the awareness of a computer system in a way that mimics our own perceptual apparatus.  If we supplement that technology by granting the system access to subtle, covert changes in physiology, who knows what technical possibilities will open up?

Psych-Profiling in Games

The Wired games blog has an article about the next Wii-enabled instalment of the survival-horror classic Silent Hill, coming later in the year.  The full article is here.  A couple of paragraphs at the end about psych-profiling the players, which I've pasted below, caught my attention.  The basic idea is that the software monitors behavioural responses to the environment and adapts the game accordingly.  My guess is that it's not as subtle as the creators claim below.  IMO, here is an application crying out for the physiological computing approach: imagine if we could develop a player profile based on both overt behavioural responses and covert psychophysiological reactions to different events.  The more complexity you can work into your player profile, the more subtlety and personalisation can be achieved by software adaptation.  Of course, as usual, this kind of probing of player experience comes with a range of data protection issues.  If current events surrounding software privacy (e.g. Facebook, Phorm) are anything to go by, this is likely to be even more of an issue for future systems.

“The way that (most) games deal with interactivity can be quite simple and dull,” says Barlow. “You’re the big barbarian hero, do you want to save the maiden or not? Do you want to be good or evil? It’s slightly childish. The idea behind the psych profile is that the game is constantly monitoring what the player is doing, and it creates a very deep set of data around that, and every element of the game is changed and varied.”

Barlow and Hulett wouldn’t talk, at this early stage, about what sorts of things might change due to how you play the game, or what kind of data the game collects about you as you play. In the trailer that Konami showed, a character flashed between two very different physical appearances — that could be one of the things that changes.

The psych profile also sounds slightly sneaky. You won’t necessarily know that things have changed based on your gameplay style, says Hulett: “When you go online and talk about it with your friends, they wouldn’t know what you were talking about.”

“We’re trying to play on subconscious things. Pick up on things that you don’t know you’re giving away,” says Barlow.
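
To sketch what a profile fusing overt behaviour with covert psychophysiology might look like, here is a toy version; every field, event name and measure below is hypothetical.

```python
from collections import defaultdict

class PlayerProfile:
    """Toy profile combining what the player does with how they react."""

    def __init__(self) -> None:
        self.behaviour = defaultdict(int)   # overt: counts of in-game actions
        self.reactions = defaultdict(list)  # covert: arousal samples per event

    def log_behaviour(self, event: str) -> None:
        self.behaviour[event] += 1          # e.g. "fled_enemy", "opened_door"

    def log_reaction(self, event: str, arousal: float) -> None:
        self.reactions[event].append(arousal)  # e.g. a GSR spike for the event

    def scariest_event(self) -> str | None:
        """Adaptation hook: resurface whatever provoked the strongest
        covert reaction, even if the player never showed it overtly."""
        if not self.reactions:
            return None
        return max(self.reactions,
                   key=lambda e: sum(self.reactions[e]) / len(self.reactions[e]))
```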

Manipulating vs. Mirroring

In preparing a “futuristic” talk about physiological computing, I’m pondering how a system might adapt itself to physiological data indicating that the user just got upset or bored or exasperated.  In the past, I’ve focused on the Gilleade et al. (2005) classification where the system may help the user, challenge the user or emote the user.  In my view, whether these adaptations are overt or covert, the system is attempting to manipulate the state of the user in a desired direction (generally to preserve task engagement and minimise those states that may disrupt engagement).

On the other hand, the system could simply mirror the psychological state of the user.  This mirroring approach comes in two categories: first, to mimic the state of the user in order to convey empathy, as in the RoCo project at MIT; alternatively, the system could mirror the state of the user via a biofeedback-type display in order to increase self-awareness and promote self-regulation.  The distinction between mirroring and manipulating is fairly subtle – adaptive responses designed to manipulate will also act as mirrors once the user cottons on to the mechanics of the system design.
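
The same detected state can feed either policy; here is a minimal sketch of the two, with the state labels and responses as illustrative placeholders.

```python
def manipulate(user_state: str) -> str:
    """Steer the user back towards engagement (help/challenge/emote)."""
    responses = {
        "bored": "increase_challenge",
        "exasperated": "offer_help",
        "upset": "emote_sympathetically",
    }
    return responses.get(user_state, "no_change")

def mirror(user_state: str, empathic: bool) -> str:
    """Reflect the state back to the user: empathic mimicry
    (RoCo-style) or a biofeedback display for self-regulation."""
    if empathic:
        return f"mimic:{user_state}"
    return f"display_biofeedback:{user_state}"
```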