For the past two weeks Steve and I have been working with Manifest.AR, a collective of AR artists, on an exhibit to be shown at FACT in February 2013. The exhibit has been commissioned as part of the ARtSENSE project, on which we work.
I am one of the organisers of a workshop event at ICMI 2012 entitled “BCI Grand Challenges.” The submission deadline was originally this Friday (15th) but has now been extended to the 30th June. Full details are below.
With regard to the development of physiological computing systems, whether they are BCI applications or fall into the category of affective computing, there seem (to me) to be two distinct research communities at work. The first (and older) community consists of university-based academics, like myself, doing basic research on measures, methods and prototypes with the primary aim of publishing our work in various conferences and journals. For the most part, we are a mixture of psychologists, computer scientists and engineers, many of whom have an interest in human-computer interaction. The second community formed around the availability of commercial EEG peripherals, such as the Emotiv and NeuroSky headsets. Some members of this community are academics and others are developers; I suspect many are dedicated gamers. They are looking to build applications and hacks that embellish the interactive experience, with a strong emphasis on commercialisation.
There are many differences between the two groups. My own academic group is ‘old-school’ in many ways, motivated by research issues and defined by the usual hierarchies associated with specialisation and rank. The newer group is more inclusive (the tag-line on the NeuroSky site is “Brain Sensors for Everyone”); they basically want to build stuff and preferably sell it.
Way back in February, Kiel and I did an event called Body Lab in conjunction with our LJMU colleagues at OpenLabs. The idea for this event originated in a series of conversations between ourselves and OpenLabs about our mutual interest in digital health. The brief of OpenLabs is to “support local creative technology companies to develop new products and services that capitalise upon global opportunities.” Their interest in our work on physiological computing was to put this idea out among their community of local creatives and digital types.
I was initially apprehensive about the wisdom of this event. I’m quite used to talking about our work with others from the research community, on both the commercial and academic side – what makes me slightly uncomfortable is talking about possible implementations, because I feel the available sensor apparatus and other tools are not yet sufficiently advanced. I was also concerned about whether a day-long event on this topic would pull in a sufficient number of participants – what we do has always felt very “niche” to me. Anyhow, some smooth-talking from Jason Taylor (our OpenLabs contact) and a little publicity in the form of this short podcast convinced us that we should give it our best shot.
The biocybernetic loop is the underlying mechanic behind physiologically interactive systems. It describes how physiological information is collected from a user, analysed and subsequently translated into a response at the system interface. The most common manifestation of the biocybernetic loop can be seen in traditional biofeedback therapies, whereby the physiological signal is represented as a reflective numeric or graphic (i.e. the representation changes in real time with the signal).
In the 1990s a team at NASA published a paper that introduced a new take on the traditional biocybernetic loop format: biocybernetic adaptation, whereby physiological information is used to adapt the system the user is interacting with, rather than merely reflect it. In this instance the team had implemented a flight simulator that used EEG measures to control the auto-pilot status, with the intent of regulating pilot attentiveness.
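The closed loop the NASA team described can be sketched in a few lines of code. This is only an illustrative toy, not the original system: the band powers, thresholds and control logic below are all invented for the example, although the engagement index itself (beta power divided by alpha plus theta power) is the one reported in the NASA work.

```python
# Toy sketch of a biocybernetic adaptation loop: an EEG-derived
# engagement index decides whether the auto-pilot stays engaged.
# All numbers and thresholds here are illustrative, not from NASA.

def engagement_index(beta, alpha, theta):
    """EEG engagement index: ratio of beta power to alpha + theta power."""
    return beta / (alpha + theta)

def adapt_autopilot(index, autopilot_on, low=0.4, high=0.7):
    """Adaptive logic: if the pilot is under-engaged, disengage the
    auto-pilot (forcing manual control to raise attentiveness); if the
    pilot is highly engaged, let the auto-pilot take over; otherwise
    leave the current mode unchanged."""
    if index < low:
        return False          # under-engaged -> hand back manual control
    if index > high:
        return True           # highly engaged -> auto-pilot can take over
    return autopilot_on       # within band -> no change

# Simulated stream of (beta, alpha, theta) band-power samples
samples = [(0.9, 0.5, 0.4), (0.3, 0.6, 0.5), (0.8, 0.4, 0.3)]
autopilot = True
for beta, alpha, theta in samples:
    autopilot = adapt_autopilot(engagement_index(beta, alpha, theta), autopilot)
    print(autopilot)  # True, False, True
```

The key design point is the middle "dead band": the loop only changes the system state when the inferred user state drifts clearly out of range, which keeps the adaptation from oscillating on every noisy sample.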
Dr. Alan Pope was the lead author of that paper and has worked extensively in the field of biocybernetic systems for several decades; outside the academic community he’s probably best known for his work on biofeedback gaming therapies. To our good fortune, we met Alan at a workshop we ran last year at CHI (a video of his talk can be found here), and he kindly gave us the opportunity to delve further into his work with an interview.
So follow us across the threshold if you will and prepare to learn more about the origins of the biocybernetic loop and its use at NASA along with its future in research and industry.
The field of Physiological Computing consists of systems that use data from the human nervous system as control input to a technological system. Traditionally these systems have been grouped into two categories: one where physiological data is used as a form of input control, and a second where spontaneous changes in physiology are used to monitor the psychological state of the user. The field of Brain-Computer Interfacing (BCI) traditionally conceives of BCIs as controllers for interfaces, devices which allow you to “act on” external devices as a form of input control. However, most BCIs do not provide a reliable and efficient means of input control, and they are difficult to learn and use relative to other available modes. We propose to change the conceptual use of “BCI as an actor” (input control) into “BCI as an intelligent sensor” (monitor). This shift of emphasis promotes the capacity of BCI to represent spontaneous changes in the state of the user in order to induce intelligent adaptation at the interface. BCIs can increasingly be used as intelligent sensors which “read” passive signals from the nervous system and infer user states to adapt human-computer, human-robot or human-human interaction (HCI, HRI, HHI). This perspective on BCIs challenges researchers to understand how information about the user state should support different types of interaction dynamics, from supporting the goals and needs of the user to conveying state information to other users. What adaptation to which user state constitutes opportune support? How does feedback from the changing HCI or HRI affect brain signals? Many research challenges need to be tackled here.
The Quantified Self Europe presentation videos are now online. Enjoy!
I recorded my heart rate using the body blogging system and my daily mood using Moodscope for three months in 2011. I wrote about the aim of this project and some intermediary experiences in previous blogs and would now like to talk about my final impressions and what I learned from combining the two systems. I presented these results at the Quantified Self Conference in Amsterdam in November 2011.
We would like to thank all participants for taking part in the ARtSENSE Visual Aesthetic Interest Survey which is now closed. We have completed the prize draw and winners have been contacted by email.
PhD student at Liverpool John Moores University, UK
Way back in 2008, I was due to go to Florence to present at a workshop on affective BCI as part of CHI. In the event, I was ill that morning and missed the trip and the workshop. As I’d prepared the presentation, I made a podcast for sharing with the workshop attendees. I dug it out of the vaults for this post because gaming and physiological computing is such an interesting topic.
The work is dated now, but essentially I draw a distinction between my understanding of BCI and biocybernetic adaptation: the former is an alternative means of input control within the HCI, whereas the latter can be used to adapt the nature of the HCI itself. I also argue that BCI is ideally suited to certain types of game mechanics precisely because it will not work 100% of the time. I used the TV series “Heroes” to illustrate these kinds of mechanics, which I regret in hindsight, because I totally lost all enthusiasm for that show after series 1.
The original CHI paper for this presentation is available here.