Tag Archives: concepts

Designing for the gullible

There’s a nice article in today’s Guardian by Charles Arthur regarding user gullibility in the face of technological systems.  In this case, he’s talking about the voice risk analysis (VRA) software used by local councils and insurance companies to detect fraud (see related article by the same author), which performs fairly poorly when evaluated, but is reckoned by the bureaucrats who purchased the system to be a huge money-saver.  The way it works is this – the operator receives a probability that the claimant is lying (based on “brain traces in the voice” – in reality, probably changes in the fundamental frequency and pitch of the voice), and on this basis may elect to ask more detailed questions.

Charles Arthur makes the point that we’re naive and gullible when faced with a technological diagnosis.  And this is a fair point, whether it’s the voice analysis system or a physiological computing system providing feedback that you’re happy or tired or anxious.  Why do we tend to yield to computerised diagnosis?  In my view, you can blame science for that – in our positivist culture, cold objective numbers will always trump warm subjective introspection.  The first experimental psychologist, Wilhelm Wundt (1832-1920), pointed to this dichotomy when he distinguished between mediated and unmediated consciousness.  The latter is linked to introspection whereas the former demands the intervention of an instrument or technology.  If you go outside on an icy day and say to yourself “it’s cold today” – your consciousness is unmediated.  If you supplement this insight by reading a thermometer – “wow, two degrees below zero” – that’s mediated consciousness.  One is broadly true from that person’s perspective whereas the other is precise from the point of view of almost anyone.

The main point of today’s article is that we tend to trust technological diagnosis even when the scientific evidence supporting system performance is flawed (as is claimed in the case of the VRA system).  Again, true enough – but in fairness, most users of the VRA never got the chance to review the system evaluation data.  The staff are trained to trust the system by the company rep who sold it and showed them how to use it.  From the perspective of the customers, insurance staff may suddenly have started to ask a lot of detailed questions, which indicated their stories were not believed, which probably made the customers agitated and anxious, thereby raising the pitch of their voices and turning themselves from possibles into definites.  The VRA system “works” very well in this context because nobody really knew how it worked, or even whether it worked.

What does all this mean for physiological computing?  First of all, system designers and users must accept that psychophysiological measurement will never provide a perfect, one-to-one model of human experience.  The system builds a model of the user state, not a perfect representation.  Given this restriction, system designers must be clever about providing feedback to the user.  Explicit and continuous feedback from the system is likely to undermine its credibility in the eyes of the user, because every classification error will be exposed.  Users of physiological computing systems must be sufficiently informed to understand that feedback from the system is an educated assessment.

The construction of physiological computing systems is a bridge-building exercise of sorts – a link between the nervous system and the computer chip.  Unlike most such constructions, this bridge is unlikely ever to meet in the middle.  For that to happen, the user must rely on his or her gullibility to make the necessary leap of faith and close the circuit.  Unrealistic expectations will lead to eventual disappointment and disillusionment; conservative cynicism and suspicion will leave the whole physiological computing concept stranded at the starting gate – it’s up to designers to build interfaces that lead the user down the middle path.

Physiological Computing F.A.Q.

This post is out of date, please see the dedicated FAQ page for the latest revisions.

1.  What is physiological computing?

Physiological Computing is a term used to describe any computing system that uses real-time physiological data as an input stream to control the user interface.  A physiological computing system takes psychophysiological information from the user, such as heart rate or brain activity, and uses these data to make the software respond in real-time.  The development of physiological computing is a multidisciplinary field of research involving contributions from psychology, neuroscience, engineering, & computer science.

2.  How does physiological computing work?

Physiological computing systems collect physiological signals, analyse them in real-time and use this analysis as an input for computer control.  This cycle of data collection, analysis, interpretation is encapsulated within a biocybernetic control loop.

This loop describes how eye movements may be captured and translated into up/down and left/right commands for cursor control.  The same flow of information can be used to represent how changes in electrocortical activity (EEG) of the brain can be used to control the movement of an avatar in a virtual world or to activate/deactivate system automation.  With respect to an affective computing application, a change in physiological activity, such as increased blood pressure, may indicate higher levels of frustration and the system may respond with help information.  The same cycle of collection-analysis-translation-response is apparent.  Alternatively, physiological data may be logged and simply represented to the user or a medical professional; this kind of ambulatory monitoring doesn’t involve human-computer communication but is concerned with the enhancement of human-human interaction.
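To make the loop concrete, here is a minimal sketch in Python of one pass around the biocybernetic loop.  Everything in it is illustrative: the sensor read is a random stand-in for a real acquisition API, and the baseline, threshold and adaptation are hypothetical choices rather than values from any real system.

```python
import random  # stands in for a real sensor driver


def read_sensor():
    """Collection stage: hypothetical stand-in for a real acquisition
    API, returning one heart-rate sample (beats per minute)."""
    return 70 + random.gauss(0, 5)


def analyse(sample, baseline=70.0):
    """Analysis stage: express the sample relative to a resting baseline."""
    return sample - baseline


def translate(delta, threshold=10.0):
    """Translation stage: map the analysed signal onto a discrete
    inferred user state (threshold is an illustrative assumption)."""
    return "aroused" if delta > threshold else "calm"


def respond(state):
    """Response stage: the interface adaptation triggered by the state."""
    if state == "aroused":
        return "offer help / reduce task demand"
    return "no adaptation"


# One pass around the loop: collect -> analyse -> translate -> respond
sample = read_sensor()
state = translate(analyse(sample))
action = respond(state)
```

In a real system the same four-stage skeleton holds; only the signal, the inference and the adaptation change.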

3.  Give me some examples.
Researchers became interested in physiological computing in the 1990s.  A group based at NASA developed a system that measured user engagement (whether the person was paying attention or not) using the electrical activity of the brain.  This measure was used to control an autopilot facility during simulated flight deck operation.  If the person was paying attention, they were allowed to use the autopilot; if attention lapsed, the autopilot was switched off – thereby prompting the pilot to take manual control and re-engage with the task.
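The NASA work used an EEG engagement index computed as the ratio of beta-band power to combined alpha- and theta-band power.  The sketch below pairs that index with a purely illustrative switching rule – the thresholds and the hysteresis band are my assumptions, not part of the original system.

```python
def engagement_index(theta, alpha, beta):
    """EEG engagement index from the NASA studies: beta-band power
    divided by combined alpha- and theta-band power."""
    return beta / (alpha + theta)


def update_autopilot(index, autopilot_on, low=0.4, high=0.7):
    """Toy negative-feedback rule (thresholds are illustrative):
    disengage the autopilot when engagement falls, restore it when
    engagement recovers; in between, leave the mode unchanged."""
    if index < low:
        return False   # attention lapsed: force manual control
    if index > high:
        return True    # engagement is high: automation may assist
    return autopilot_on  # hysteresis band: no mode change
```

The hysteresis band is there for the same reason a thermostat has one: without it, an index hovering around a single threshold would toggle the autopilot on and off continuously.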

Physiological computing was also explored by the MIT Media Lab during their investigations into affective computing.  These researchers were interested in how psychophysiological data could represent the emotional status of the user – and enable the computer to respond to user emotion, for example by offering help if the user was irritated by the system.

Physiological computing has been applied to a range of software applications and technologies, such as: robotics (making robots aware of the psychological status of their human co-workers), telemedicine (using physiological data to diagnose both health and psychological state), computer-based learning (monitoring the attention and emotions of the student) and computer games.

4.  Is the Wii an example of physiological computing?
In a way.  The Wii monitors movement and translates that movement into a control input in the same way as a mouse.  Physiological computing, as defined here, is quite different.  First of all, these systems focus on hidden psychological states rather than obvious physical movements.  Secondly, the user doesn’t have to move or do anything to provide input to a physiological computing system.  What physiological computing does is monitor “hidden” aspects of behaviour.

5.  How is physiological computing different from Brain-Computer Interfaces?
Brain-Computer Interfaces (BCI) are a category of system where the user self-regulates their physiology in order to provide input control to a computer system.  For example, a user may self-regulate activity in the EEG (electroencephalogram – electrical activity of the brain) in order to move a cursor on the computer screen.  Effectively, BCIs offer an alternative to conventional input devices, such as the keyboard or mouse, which is particularly useful for people with disabilities.

There is some overlap between physiological computing and BCIs, but also some important differences.  The physiological computing approach has been compared to “wiretapping” in the sense that it monitors changes in user psychology without requiring the user to take explicit action.  Use of a BCI is associated with intentional control and requires a period of training prior to use.

6.  OK.  But the way you describe physiological computing sounds like a Biofeedback system….
There is some crossover between the approach used by physiological computing and biofeedback therapies.  But like BCI, biofeedback is designed to help people self-regulate their physiological activity, e.g. to reduce the rate of breathing for those who suffer from panic attacks.  There is some evidence that exposing a person to a physiological computing system may prompt improved self-regulation of physiology – simply because changes at the interface of a physiological computer may be meaningful to the user, e.g. if the computer does this, it means I’m stressed and need to relax.

The use of computer games to enhance biofeedback training represents the type of system that brings both physiological computing and biofeedback together.  For example, systems have been developed to treat Attention-Deficit Hyperactivity Disorder (ADHD) where children are trained to control brain activity by playing a computer game – see this link for more info.

7.  Can I buy a physiological computer?
You can buy systems that use psychophysiology for human-computer interaction.  For example, a number of headsets developed by Emotiv and NeuroSky are on the market to be used as an alternative to a keyboard or mouse.  At the moment, commercial systems fall mainly into the BCI application domain.  There are also a number of biofeedback games that fall into the category of physiological computing, such as The Wild Divine.

8.  What do you need in order to create a physiological computer?
In terms of hardware, you need psychophysiological sensors (such as a GSR sensor, heart rate monitoring apparatus or EEG electrodes) connected to an analogue-to-digital converter.  These digital signals can be streamed to a computer, e.g. via ethernet.  On the software side, you need an API or equivalent to access the signals, and you’ll need to develop software that converts incoming physiological signals into a variable that can be used as a potential control input to an existing software package, such as a game.  Of course, none of this is straightforward, because you need to understand something about psychophysiological associations (i.e. how changes in physiology can be interpreted in psychological terms) in order to make your system work.
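As a toy example of the software side, the sketch below turns a hypothetical skin conductance (GSR) stream into a 0–1 control variable that a game could consume.  The window length and the calibration range are illustrative assumptions; a real system would calibrate them per user.

```python
from collections import deque


class ArousalInput:
    """Minimal sketch: convert a raw skin-conductance stream
    (microsiemens) into a 0-1 control variable for a game.
    The window size and calibration range are illustrative."""

    def __init__(self, window=32, scl_min=2.0, scl_max=15.0):
        self.samples = deque(maxlen=window)  # rolling window of raw samples
        self.scl_min, self.scl_max = scl_min, scl_max

    def push(self, microsiemens):
        """Feed one raw sample from the (hypothetical) sensor API."""
        self.samples.append(microsiemens)

    def control_value(self):
        """Moving average, rescaled to [0, 1] against the calibration
        range and clamped, so the game sees a bounded input."""
        if not self.samples:
            return 0.0
        mean = sum(self.samples) / len(self.samples)
        norm = (mean - self.scl_min) / (self.scl_max - self.scl_min)
        return max(0.0, min(1.0, norm))
```

The smoothing matters as much as the scaling: raw electrodermal data is noisy, and feeding it to a game unfiltered would make the adaptation twitchy.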

9.  Is it like anything I have experienced?
That’s hard to say because there isn’t very much apparatus like this generally available.  If you’ve ever worn ECG sensors in either a clinical or sporting setting, you’ll know what it’s like to see your physiological activity “mirrored” in this way.  That’s one aspect.  The closest equivalent is biofeedback, where physiological data is represented as a visual display or a sound in real-time, but biofeedback is relatively specialised and used mainly to treat clinical problems.

10.  A lot of the technology involved sounds ‘medical’. Is this something hospitals would use?
The sensor technology is widely used by medical professionals to diagnose physiological problems and to monitor physiological activity.  Physiological computing represents an attempt to bring this technology to a more mainstream population by using the same monitoring technology to improve human-computer interaction.  In order to do this, it’s important to move the sensor technology from the static systems where the person is tethered by wires (as used by hospitals) to mobile, lightweight sensor apparatus that people can wear comfortably and unhindered as they work and play.

11.  Who is working on this stuff?
Physiological computing is inherently multidisciplinary.  The business of deciding which signals to use and how they represent the psychological state of the user is the domain of psychophysiology (i.e. inferring psychological significance from physiological signals).  Real-time data analysis falls into the area of signal processing that can involve professionals with backgrounds in computing, mathematics and engineering.  Designing wearable sensor apparatus capable of delivering good signals outside of the lab or clinical environment is of interest to people working in engineering and telemedicine.  Deciding how to use psychophysiological signals to drive real-time adaptation is the domain of computer scientists, particularly those interested in human-computer interaction and human factors.

12.  What can a physiological computer allow me to do that is new?
Physiological computing has the potential to offer a new scenario for how we communicate with computers.  At the moment, human-computer communication is asymmetrical with respect to information exchange.  Your computer can tell you lots of things about itself, such as memory usage and download speed, but it is essentially in the dark about the person on the other side of the interaction.  That’s why, when the computer tries to ‘second-guess’ the next thing you want to do, it normally gets it wrong, e.g. the Microsoft paperclip.  By allowing the computer to access a representation of the user state, we open up the possibility of symmetrical human-computer interaction – where ‘smart’ systems adapt themselves to user behaviour in a way that’s both intuitive and timely.  In theory at least, we get help from the computer when we really need it.  If a computer game is boring, the software knows to make the game more challenging.  More than this, by making the computer aware of our internal state, we allow software to personalise its performance to the individual with a degree of accuracy.
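The boring-game example amounts to a simple negative-feedback control rule.  The sketch below is one hypothetical way to express it – the function name, the target engagement level and the gain are all illustrative assumptions, not part of any shipping system.

```python
def adapt_difficulty(difficulty, engagement, target=0.5, gain=0.2):
    """Toy symmetrical-adaptation rule: nudge game difficulty (0-1)
    towards whatever keeps the inferred engagement (0-1) near a
    target value.  Target and gain are illustrative constants."""
    error = target - engagement   # positive when the player looks bored
    new_difficulty = difficulty + gain * error
    return max(0.0, min(1.0, new_difficulty))  # keep difficulty in [0, 1]
```

Run once per adaptation cycle, this raises the challenge when the inferred engagement drops below the target and eases off when the player looks overloaded.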

13.  Will these systems be able to read my mind?
Psychophysiological measures can provide an indication of a person’s emotional status.  For instance, they can indicate whether you are alert or tired, or whether you are relaxed or tense.  There is some evidence that they can distinguish between positive and negative mood states.  The same measures can also capture whether a person is mentally engaged with a task or not.  Whether this counts as ‘reading your mind’ depends on your definition.  The system would not be able to diagnose whether you were thinking about making a grilled cheese sandwich or a salad for lunch.

14.  What about the privacy of my data?
Good question.  Physiological computing inevitably involves a sustained period of monitoring the user.  This information is, by definition, highly sensitive.  An intruder could monitor the ebb and flow of user mood over a period of time.  If the intruder could access software activity as well as physiology, he or she could determine whether this web site or document elicited a certain reaction from the user or not.  Most of us regard our unexpressed emotional responses as personal and private information.  In addition, data collected via physiological computing could potentially be used to indicate medical conditions such as high blood pressure or heart arrhythmia.  Privacy and data protection are huge issues for this kind of technology.  It is important that the user exercises ultimate control with respect to: (1) what is being measured, (2) where it is being stored, and (3) who has access to that information.

15.  Where can I find out more?
There are a number of written and online sources regarding physiological computing.  Almost all have been written for an academic audience.  Here are a number of review articles:

Allanson, J. (2002). Electrophysiologically interactive computer systems. IEEE Computer, March 2002.
Fairclough, S. H. (2009). Fundamentals of physiological computing. Interacting with Computers, 21, 133-145.
Gilleade, K. M., Dix, A., & Allanson, J. (2005). Affective videogames and modes of affective gaming: Assist me, challenge me, emote me. In Proceedings of DiGRA 2005.
Picard, R. W., & Klein, J. (2002). Computers that recognise and respond to user emotion: Theoretical and practical implications. Interacting with Computers, 14, 141-169.

Manipulating vs. Mirroring

In preparing a “futuristic” talk about Physiological Computing, I’m pondering how a system might adapt itself to physiological data indicating that the user just got upset or bored or exasperated.  In the past, I’ve focused on the Gilleade et al. (2005) classification, where the system may assist the user, challenge the user or emote the user.  In my view, whether these adaptations are overt or covert, the system is attempting to manipulate the state of the user in a desired direction (generally to preserve task engagement and to minimise those states that may disrupt engagement).  On the other hand, the system could simply mirror the psychological state of the user.  This mirroring approach comes in two categories.  The first is to mimic the state of the user in order to convey empathy; for example, the RoCo project at MIT.  Alternatively, the system could simply mirror the state of the user using a biofeedback-type display in order to increase self-awareness and promote self-regulation.  The distinction between mirroring and manipulating is fairly subtle.  Adaptive responses designed to manipulate will also act as mirrors once the user cottons on to the mechanics of system design.