Author Archives: Steve Fairclough

Biometrics, Game Evaluation and User XP: Approach with caution

This post represents some thoughts on the use of psychophysiology to evaluate the player experience during a computer game.  As such, it’s tangential to the main business of this blog, but it’s a topic that I think is worth some discussion and debate, as it raises a whole bunch of pertinent issues for the design of physiological computer games.

Psychophysiological methods are combined with computer games in two types of context: applied psychology research and game evaluation in a commercial setting.  With respect to the former, a researcher may use a computer game as a platform to study a psychological concept, such as the effects of game play on aggression or how playing against a friend or a stranger influences the experience of the player (see this recent issue of Entertainment Computing for examples).  In both cases, we're dealing with the application of an experimental psychology methodology in which the game is used as a task or virtual world within which to study behaviour; the computer game merely represents an environment or context for studying human behaviour.   This approach is characterised by several features: (1) comparisons are made between carefully controlled conditions, (2) statistical power is important (if you want to see your work published), so large numbers of participants are run through the design, (3) the selection of participants is carefully controlled (equal numbers of males and females, comparable age ranges if groups are compared) and (4) designs are counterbalanced, i.e. if participants play 2 different games, half of them play game 1 then game 2 whilst the other half play game 2 and then game 1; this matters because the order in which games are presented often influences the response of the participants.
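
To make feature (4) concrete, here is a minimal sketch in Python of how a counterbalanced assignment of participants to play orders might look.  The game labels and participant count are purely hypothetical placeholders, not taken from any particular study.

```python
import random

# Hypothetical illustration of feature (4): counterbalance the order
# of two games across an even number of participants.
ORDERS = [("game 1", "game 2"), ("game 2", "game 1")]

def assign_orders(participant_ids):
    """Randomly assign half of the participants to each play order."""
    ids = list(participant_ids)
    random.shuffle(ids)
    half = len(ids) // 2
    assignment = {pid: ORDERS[0] for pid in ids[:half]}        # game 1 first
    assignment.update({pid: ORDERS[1] for pid in ids[half:]})  # game 2 first
    return assignment

if __name__ == "__main__":
    for pid, order in sorted(assign_orders(range(1, 21)).items()):
        print(pid, order)
```
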
Continue reading

CFP – Special Session at ICMI 2011 “BCI and Multimodality”

The deadline for submissions to this special session has been extended to May 20th

Anton Nijholt from the University of Twente and Rob Jacob from Tufts University are organizing a special session at ICMI 2011 on “BCI and Multimodality”. All ICMI sessions, including the special sessions, are plenary. Hence, having a special session during the ICMI conference is an opportunity to address a broad audience and make them aware of new developments and special topics.  Clearly, if we look at BCI for non-medical applications, a multimodal approach is natural. We can make use of knowledge about the user, task, and context. Part of this information is available in advance; the rest becomes available on-line, in addition to brain activity measured with EEG or fNIRS. The intended user is not disabled: he or she can use other modalities to pass commands and preferences to the system, and the system may also have information obtained from monitoring the mental state of the user. Moreover, it may be the case that different BCI paradigms can be employed in parallel or sequentially in multimodal (or hybrid) BCI applications.
Continue reading

CFP – 2nd Workshop on Affective Brain-Computer Interfaces (aBCI)

Workshop at ACII 2011

http://hmi.ewi.utwente.nl/abci2011

http://www.acii2011.org

The second workshop on affective brain-computer interfaces will explore the advantages and limitations of using neurophysiological signals as a modality for the automatic recognition of affective and cognitive states, and the possibilities of using this information about the user state in innovative and adaptive applications. The goal is to bring together researchers from the communities of brain-computer interfacing, affective computing, neuro-ergonomics, and affective and cognitive neuroscience to present state-of-the-art progress and visions on the various overlaps between those disciplines.

Continue reading

REFLECT: Biocybernetic control with multiple loops

It has been said that every cloud has a silver lining, and the only positive of chronic jet lag (Kiel and I arrived in Vancouver yesterday for the CHI workshop) is that it gives you a chance to catch up on overdue tasks.  This is a post I’d been meaning to write for several weeks about my involvement in the REFLECT project.

For the last three years, our group at LJMU has been working on a collaborative project called REFLECT, funded by the EU Commission under the Future and Emerging Technology Initiative.  This project was centred around the concept of “reflective software” that responds implicitly and in real time to changes in user needs.  A variety of physiological sensors are applied to the user in order to inform this kind of reflective adaptation.  So far, this is regular fare for anyone who’s read this blog before, being a standard set-up for a biocybernetic adaptation system.
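
For readers new to the idea, the basic shape of a biocybernetic loop can be sketched in a few lines of Python.  Everything here (the sensor reader, the arousal index, the adaptation rule and its thresholds) is a hypothetical placeholder for illustration only, not anything taken from the REFLECT project itself.

```python
import time

# Hypothetical sketch of a biocybernetic loop: read physiological data,
# derive a psychological index, adapt the software implicitly in real time.

def read_heart_rate():
    """Placeholder for a real sensor driver; returns beats per minute."""
    return 72.0

def arousal_index(heart_rate, baseline=70.0):
    """Crude index: positive values suggest arousal above the user's baseline."""
    return (heart_rate - baseline) / baseline

def adapt_software(index, high=0.15, low=-0.05):
    """Map the index onto an implicit adaptation of the application."""
    if index > high:
        return "reduce task demand"
    if index < low:
        return "increase challenge"
    return "no change"

for _ in range(10):                      # one short monitoring episode
    idx = arousal_index(read_heart_rate())
    print(f"index={idx:+.2f} -> {adapt_software(idx)}")
    time.sleep(1)                        # loop rate depends on the measure used
```
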

Continue reading

The Ultimate Relax to Win Dynamic

I came across an article in a Sunday newspaper a couple of weeks ago about an artist called xxxy who has created an installation using a BCI of sorts.  I’m piecing this together from what I read in the paper and what I could see on his site, but the general idea is this: a person wears a portable EEG rig (I don’t recognise the model) and is placed in a harness with wires reaching up and up and up into the ceiling.  The person closes their eyes and relaxes – presumably as they enter a state of alpha augmentation, they begin to levitate courtesy of the wires.  The more they relax or the longer they sustain that state, the higher they go.  It’s hard to tell from the video, but the person seems to be suspended around 25-30 feet in the air.
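
If you wanted to prototype this kind of relax-to-win dynamic yourself, the core mapping is simple: the control signal is something like relative alpha power, and height accumulates for as long as that signal stays above a threshold.  The sketch below is purely illustrative; the threshold, rise and fall rates are assumptions of mine, not details of the installation.

```python
# Hypothetical relax-to-win mapping: sustained alpha activity raises the user.

ALPHA_THRESHOLD = 0.6   # assumed relative alpha power that counts as "relaxed"
RISE_PER_SECOND = 0.3   # metres gained per second of sustained relaxation
FALL_PER_SECOND = 0.5   # metres lost per second when relaxation is broken
MAX_HEIGHT = 9.0        # roughly the 25-30 feet described above

def update_height(height, relative_alpha, dt=1.0):
    """Raise the harness while alpha stays high, lower it gently otherwise."""
    if relative_alpha >= ALPHA_THRESHOLD:
        height += RISE_PER_SECOND * dt
    else:
        height -= FALL_PER_SECOND * dt
    return max(0.0, min(MAX_HEIGHT, height))

# Simulated run: the user relaxes, holds the state, then loses it.
height = 0.0
for alpha in [0.4, 0.7, 0.8, 0.8, 0.9, 0.7, 0.5, 0.3]:
    height = update_height(height, alpha)
    print(f"alpha={alpha:.1f} -> height={height:.1f} m")
```
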

Continue reading

Physiological Computing meets Augmented Reality in a Museum

First of all, an apology – Kiel and I try to keep this blog ticking over, but for most of 2011, we’ve been preoccupied with a couple of large projects and getting things organised for the CHI workshop in May.  One of the “things” that led to this hiatus on the blog is a new research project funded by the EU called ARtSENSE, which is the topic of this post.

Continue reading

Road rage, unhealthy emotions and affective computing

From the point of view of an outsider, the utility and value of computer technology that provides emotional feedback to the human operator are questionable.  The basic argument normally goes like this: even if the technology works, do I really need a machine to tell me that I’m happy or angry or calm or anxious or excited?  First of all, the feedback provided by this machine would be redundant; I already have a mind/body that keeps me fully apprised of my emotional status – thank you.  Secondly, if I’m angry or frustrated, do you really think I would be helped in any way by a machine that drew my attention to these negative emotions?  Actually, that would be particularly annoying.  Finally, sometimes I’m not quite sure how I’m feeling or how I feel about something; feedback from a machine that says I’m happy or angry would just muddy the waters and add further confusion.

Continue reading

Studentships in Physiological Computing

Liverpool John Moores University
PhD Studentships in Applied Neuroscience/Psychophysiology
School of Natural Sciences and Psychology

EDIT: Application closed

Please quote Ref: IRC544

Applications are invited for two PhD studentships in the School of Natural Sciences and Psychology. The studentships consist of a tax-free stipend (currently £13,590 per annum for the 2010-2011 academic year) and tuition fees.

We seek candidates with a strong research background and an interest in physiological computing research (http://www.physiologicalcomputing.net/) for a new research project funded by the EU. Specifically, we are seeking to fund studentships in two areas associated with this project:
Continue reading

Emotiv EPOC and the triple dilemma of early adoption

The UK version of Wired magazine ran an article in last month’s edition (no online version available) about Emotiv and the development of the EPOC headset.  Much of the article focused on the human side of the story: the writer mixed biographical details of the company’s founders with an account of how the ideas driving the development of the headset came together.  I’ve written about Emotiv before here on a specific technical issue.  I still haven’t had any direct experience of the system, but I’d like to write about the EPOC again because it’s emerging as the headset of choice for early adopters.

In this article, I’d like to discuss a number of dilemmas faced by both the company and their customers.  These issues aren’t specific to Emotiv; they hold for other companies in the process of selling/developing hardware for physiological computing systems.

Continue reading

Revised Physiological Computing FAQ

This is a short post to inform regular readers that I’ve made some changes to the FAQ document for the site (link to the left).  Normally an FAQ is altered because the questions people commonly ask have changed.  In our case, it is my answers to those questions that have changed in the time since I wrote my original responses – hence the need to revise the FAQ.

The original document firmly identified physiological computing with affective computing/biocybernetic adaptation.  There was even a question making a firm division between BCI technology and physiological computing.  In the revised FAQ, I’ve dumped this distinction and attempted to view BCI as part of a broad continuum of computing devices that rely on real-time physiological data for input.  This change has not been made to arrogantly subsume BCI within the physiological computing spectrum, but to reconcile perspectives from different research communities working on common measures and technologies across different application domains.  In my opinion, the distinctions between research topics and application domains (including my own) are largely artificial, and the advancement of this technology is best served by keeping an open mind about mash-ups and hybrid systems.

I’ve also expanded the list of indicative references to include contributions from BCI, telemedicine and adaptive automation in order to highlight the breadth of applications that are united by physiological data input.

The FAQ is written to support the naive reader, who may have stumbled across our site, but as ever, I welcome any comments or additional questions from domain experts.