First International Conference on Physiological Computing

A recurring problem for academics working in the area of physiological computing is finding the right place to publish.  By the right place, I mean a forum that is receptive to multidisciplinary research and where you feel confident of reaching the right audience.  Having reviewed a lot of physiological computing papers, I see work that is often strong on measures and methodology but weak on applications; alternatively, papers focus on interaction mechanics but are poor on the measurement side.  Much of the problem lies with the expertise of the reviewers, who tend to be either psychologists or computer scientists, and it can be difficult for authors to strike a balance that satisfies both.

For this reason, I’m writing to make people aware of The First International Conference on Physiological Computing, to be held in Lisbon next January.  The deadline for papers is 30th July 2013.  A selection of papers will be published by Springer-Verlag as part of their Lecture Notes in Computer Science series.  The journal Multimedia Tools & Applications (also published by Springer) will also select papers presented at the conference to form a special issue.  In addition, a special issue of the journal Transactions on Computer-Human Interaction (TOCHI) on physiological computing is currently open for submissions; the CFP is here and the deadline is 20th December 2013.

I should also plug a new journal from Inderscience called the International Journal of Cognitive Performance Support, which has just published its first issue and would welcome contributions on brain-computer interfaces and biofeedback mechanics.

Manifest.AR: Invisible ARtaffects

First of all, apologies for our blog “sabbatical” – the important thing is that we are now back with news of our latest research collaboration involving FACT (Foundation for Art and Creative Technology) and international artists’ collective Manifest.AR.

To quickly recap, our colleagues at FACT were keen to create a new commission tapping into the use of augmented reality technology and incorporating elements of our own work on physiological computing.  Our last post (almost a year ago now, to our shame) described the time we spent with Manifest.AR last summer and our show-and-tell event at FACT.  Fast-forward to the present, and the Manifest.AR piece called Invisible ARtaffects opened last Thursday as part of the Turning FACT Inside Out show.



Manifest.AR Show and Tell

For the past two weeks Steve and I have been working with Manifest.AR, a collective of AR artists, on an exhibit to be shown at FACT in February 2013. The exhibit has been commissioned as part of the ARtSENSE project, which we work on.

At the end of the two weeks a public show-and-tell event was held at FACT on the current works in progress.

Troubleshooting and Mind-Reading: Developing EEG-based interaction with commercial systems

With regard to the development of physiological computing systems, whether they are BCI applications or fall into the category of affective computing, there seem (to me) to be two distinct research communities at work. The first (and older) community consists of university-based academics, like myself, doing basic research on measures, methods and prototypes with the primary aim of publishing our work in various conferences and journals. For the most part, we are a mixture of psychologists, computer scientists and engineers, many of whom have an interest in human-computer interaction. The second community formed around the availability of commercial EEG peripherals, such as the Emotiv and NeuroSky. Some members of this community are academics and others are developers; I suspect many are dedicated gamers. They are looking to build applications and hacks to embellish interactive experience, with a strong emphasis on commercialisation.

There are many differences between the two groups. My own academic group is ‘old-school’ in many ways, motivated by research issues and defined by the usual hierarchies associated with specialisation and rank. The newer group is more inclusive (the tag-line on the NeuroSky site is “Brain Sensors for Everyone”); they basically want to build stuff and preferably sell it.


Reflections on Body Lab

Way back in February, Kiel and I did an event called Body Lab in conjunction with our LJMU colleagues at OpenLabs.  The idea for this event originated in a series of conversations between ourselves and OpenLabs about our mutual interest in digital health. The brief of OpenLabs is to “support local creative technology companies to develop new products and services that capitalise upon global opportunities.”  Their interest in our work on physiological computing lay in putting the idea out to their community of local creatives and digital types.

I was initially apprehensive about the wisdom of this event. I’m quite used to talking about our work with others from the research community, on both the commercial and academic side – what makes me slightly uncomfortable is talking about possible implementations, because I feel the available sensor apparatus and other tools are not yet sufficiently advanced.  I was also concerned about whether a day-long event on this topic would pull in a sufficient number of participants – what we do has always felt very “niche” in my view.  Anyhow, some smooth-talking from Jason Taylor (our OpenLabs contact) and a little publicity in the form of this short podcast convinced us that we should give it our best shot.


The biocybernetic loop: A conversation with Dr Alan Pope

The biocybernetic loop is the underlying mechanic behind physiological interactive systems. It describes how physiological information is collected from a user, analysed and then translated into a response at the system interface. The most common manifestation of the biocybernetic loop can be seen in traditional biofeedback therapies, where the physiological signal is represented as a numeric or graphic display that changes in real time with the signal.
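The collect–analyse–translate cycle described above can be sketched in a few lines of code. This is a minimal illustration, not an implementation from any particular system: the simulated heart-rate sensor, the moving-average analysis and the threshold values are all invented for the example.

```python
import random

def read_sensor():
    """Stand-in for a physiological sensor, e.g. heart rate in BPM (simulated)."""
    return random.gauss(70, 5)

def analyse(samples, window=10):
    """Smooth the raw signal with a moving average over the last `window` samples."""
    recent = samples[-window:]
    return sum(recent) / len(recent)

def translate(value, baseline=70):
    """Translate the analysed signal into an interface response.  In the
    traditional biofeedback case, the display simply reflects the signal."""
    if value > baseline + 5:
        return "display: elevated"
    if value < baseline - 5:
        return "display: lowered"
    return "display: normal"

# One pass of the loop per sample: collect, analyse, translate.
samples = []
for _ in range(50):
    samples.append(read_sensor())
    feedback = translate(analyse(samples))
```

The same three-stage structure holds whether the output is a reflective biofeedback display, as here, or an adaptation of the system itself.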

In the 1990s a team at NASA published a paper that introduced a new take on the traditional biocybernetic loop format, that of biocybernetic adaptation, whereby physiological information is used to adapt the system the user is interacting with rather than merely reflect it back. In this instance the team implemented a flight simulator that used EEG measures to control the autopilot status, with the intent of regulating pilot attentiveness.

Concept art for biocybernetic adaptive plane
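The adaptive rule can be sketched as follows. The engagement index β/(α+θ), computed from EEG band powers, is the measure reported in the NASA work; the band-power values and the simple falling/rising trend rule below are illustrative simplifications rather than the study's actual thresholds.

```python
def engagement_index(beta, alpha, theta):
    """EEG engagement index: ratio of beta power to combined alpha and theta power."""
    return beta / (alpha + theta)

def adapt_automation(index_history):
    """Negative-feedback adaptation: if the engagement index is falling,
    switch the task to manual control to re-engage the pilot; if it is
    rising, automation may remain on.  (Simplified for illustration.)"""
    if len(index_history) < 2:
        return "auto"
    return "manual" if index_history[-1] < index_history[-2] else "auto"

# Example: band powers drift so that engagement declines over time.
history = [engagement_index(b, a, t)
           for b, a, t in [(10, 4, 4), (9, 5, 5), (7, 6, 6)]]
mode = adapt_automation(history)   # falling engagement -> manual control
```

The key point is the closed loop: the adaptation (removing automation) is chosen precisely because it is expected to push the measured state back toward the desired level.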

Dr. Alan Pope was the lead author on this paper, and has worked extensively in the field of biocybernetic systems for several decades; outside the academic community he’s probably best known for his work on biofeedback gaming therapies. To our good fortune we met Alan at a workshop we ran last year at CHI (a video of his talk can be found here) and he kindly allowed us the opportunity to delve further into his work with an interview.

So follow us across the threshold if you will and prepare to learn more about the origins of the biocybernetic loop and its use at NASA, along with its future in research and industry.

CFP – Brain Computer Interfaces Grand Challenge 2012

The field of Physiological Computing consists of systems that use data from the human nervous system as control input to a technological system. Traditionally these systems have been grouped into two categories: those where physiological data is used as a form of input control, and those where spontaneous changes in physiology are used to monitor the psychological state of the user.

The field of Brain-Computer Interfacing (BCI) traditionally conceives of BCIs as controllers for interfaces, devices which allow you to “act on” external devices as a form of input control. However, most BCIs do not provide a reliable and efficient means of input control and are difficult to learn and use relative to other available modes. We propose to change the conceptual use of “BCI as an actor” (input control) into “BCI as an intelligent sensor” (monitor). This shift of emphasis promotes the capacity of BCI to represent spontaneous changes in the state of the user in order to induce intelligent adaptation at the interface. BCIs can increasingly be used as intelligent sensors which “read” passive signals from the nervous system and infer user states to adapt human-computer, human-robot or human-human interaction (HCI, HRI, HHI).

This perspective on BCIs challenges researchers to understand how information about the user state should support different types of interaction dynamics, from supporting the goals and needs of the user to conveying state information to other users. What adaptation to which user state constitutes opportune support? How does feedback from the changing human-computer and human-robot interaction affect brain signals? Many research challenges need to be tackled here.
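The “BCI as an intelligent sensor” idea can be illustrated with a short sketch: passive signals are reduced to features, a user state is inferred, and the interaction is adapted accordingly. The feature names (workload, fatigue), the thresholds and the adaptation table below are all invented for illustration and are not drawn from any particular study.

```python
from dataclasses import dataclass

@dataclass
class BrainFeatures:
    """Features hypothetically derived from passive EEG, scaled 0..1."""
    workload: float
    fatigue: float

def infer_state(f: BrainFeatures) -> str:
    """Infer a coarse user state from passive signals (illustrative thresholds)."""
    if f.fatigue > 0.7:
        return "fatigued"
    if f.workload > 0.7:
        return "overloaded"
    return "nominal"

def adapt_interface(state: str) -> dict:
    """Map the inferred state onto an interface adaptation, rather than
    treating the BCI output as an explicit command."""
    return {
        "fatigued":   {"notifications": "suppress", "pace": "slow"},
        "overloaded": {"notifications": "defer",    "pace": "slow"},
        "nominal":    {"notifications": "normal",   "pace": "normal"},
    }[state]

adaptation = adapt_interface(infer_state(BrainFeatures(workload=0.8, fatigue=0.2)))
```

Note that nothing here is an “act on” command from the user; the system observes, infers and adapts, which is exactly the monitoring role the challenge proposes.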

Grand Challenge Website
