The UK version of Wired magazine ran an article in last month’s edition (no online version available) about Emotiv and the development of the EPOC headset. Much of the article focused on the human side of the story: the writer mixed biographical details of the company founders with an account of how the ideas driving the development of the headset came together. I’ve written about Emotiv before here on a specific technical issue. I still haven’t had any direct experience of the system, but I’d like to write about the EPOC again because it’s emerging as the headset of choice for early adopters.
In this article, I’d like to discuss a number of dilemmas faced by both the company and their customers. These issues aren’t specific to Emotiv; they hold for any company in the process of developing and selling hardware for physiological computing systems.
To recap, the EPOC is a headset designed to capture 14 channels of EEG from frontal (AF3, AF4, F3, F4, F7, F8), fronto-central (FC5, FC6), occipital (O1, O2), parietal (P7, P8) and temporal sites (T7, T8). Here is a diagram of the electrode sites. It uses ‘dry’ electrode technology, though the electrodes may be dampened if the connection is poor. Emotiv are effectively selling their product to two categories of customer with very different needs: a developer who wants to build BCI apps or affective computing systems, and a researcher who wants to use the headset as an ambulatory measurement unit for scientific research.
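For anyone coding against the headset, the montage is easiest to think of as a simple data structure. A minimal sketch grouping the 10-20 labels by scalp region (the inclusion of P7 and the total of 14 channels follow the standard EPOC montage and are my assumption, not Emotiv’s published spec):

```python
# EPOC electrode sites grouped by scalp region (10-20 system labels).
# Grouping and the 14-channel count are assumptions based on the
# standard EPOC montage, not an official specification.
EPOC_SITES = {
    "frontal":        ["AF3", "AF4", "F3", "F4", "F7", "F8"],
    "fronto-central": ["FC5", "FC6"],
    "temporal":       ["T7", "T8"],
    "parietal":       ["P7", "P8"],
    "occipital":      ["O1", "O2"],
}

# Flatten the regions into a single channel list.
channels = [site for sites in EPOC_SITES.values() for site in sites]
print(len(channels))  # 14 data channels (reference sites excluded)
```

Note the frontal bias of the montage, which becomes relevant below when we get to what the Cognitiv suite is probably doing.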
The first question any researcher will ask is: how good is the signal compared to lab-based EEG technology? This question covers a number of topics – electrode impedance, the influence of known artifacts (like blinking), referencing options and filtering. The Wired article refers to “25,000 research hours” logged during the development of the EPOC system, part of which was presumably spent benchmarking their nascent technology against an existing ‘gold’ standard. Here’s a quote from a member of the Emotiv staff on the comparison between the EPOC and existing EEG apparatus (taken from the discussion board, see link below).
“Our testing was in-house, against a medical grade headset which we used during our feasibility trials. Voltage and time resolution are lower and noise floor higher with the EPOC but overall spectra and signals seem well matched to a respected medical grade device. The amplifier linearity and channel phase differences are as good as any medical grade device – the resolution is limited only by the lower bit count and sampling rate.”
It sounds good, but a vague statement like “overall spectra and signals seem well matched to a respected medical grade device” is a poor sales pitch for customers from the research community. That group would pose an obvious question: why hasn’t the company published full details of this testing? I’m not asking for a peer-reviewed scientific publication (although that would be nice); I’d settle for a clearly-written report published on the company website. Researchers would like to examine the methodology by which the EPOC was compared to lab apparatus. Even a researcher who really wants an EPOC will be deterred from making a purchase if the first thing they have to do ‘out of the box’ is run their own basic validation testing.
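To make concrete what such a report might contain: the “overall spectra seem well matched” claim could be quantified as band-power ratios between simultaneous recordings on the two devices. A minimal sketch, with synthetic data standing in for both headsets (the 128 Hz sampling rate and all signal parameters are illustrative assumptions):

```python
import numpy as np

FS = 128  # Hz; assumed nominal delivered sampling rate

def band_power(signal, fs, lo, hi):
    """Power in the [lo, hi) Hz band via a simple periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum()

# Two synthetic "recordings" of the same 10 Hz alpha rhythm; the
# second has a higher noise floor, standing in for the EPOC.
rng = np.random.default_rng(0)
t = np.arange(FS * 4) / FS
alpha = np.sin(2 * np.pi * 10 * t)
lab = alpha + 0.1 * rng.standard_normal(t.size)
epoc = alpha + 0.5 * rng.standard_normal(t.size)

# Alpha-band (8-12 Hz) power ratio: near 1 means the spectra "match"
# in the band of interest despite the noisier channel.
ratio = band_power(epoc, FS, 8, 12) / band_power(lab, FS, 8, 12)
print(round(ratio, 2))
```

Tables of exactly this kind of ratio, per band and per site, from simultaneous EPOC/lab recordings would answer the researcher’s question without exposing anything about the hardware.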
The company may respond that publishing their R&D at that level of detail would jeopardise the intellectual property that already cost them 25,000 research hours. Speaking personally, I don’t buy that – the EPOC could be presented as a ‘black box’ and the report could focus on a direct comparison of signal quality, not the underlying technology.
Aside from precision of signal quality, the other thing research customers require is flexibility over data capture settings. Researchers generally need to control these settings to create different capture protocols for different kinds of experiments. In addition, researchers need to report the data collection protocol with a degree of precision if they expect to have their work published in any self-respecting conference or journal (in order that others can replicate their work if necessary). I’d recommend reading this exchange on the Emotiv FAQ discussion board for answers about the kind of flexibility the EPOC can deliver as a research tool. It shows that the underlying data is accessible and usable, but you need this particular SDK to obtain it.
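Flexibility also means being able to report what the headset actually did. Purely as an illustration (every value below is my assumption about the EPOC, not a published specification), the minimal set of acquisition parameters a methods section would need looks something like:

```python
# Illustrative only: the acquisition parameters a replicable methods
# section must report, and which a research SDK should expose.
# All values are assumptions, not Emotiv's published specifications.
acquisition = {
    "sampling_rate_hz": 128,      # assumed delivered rate
    "resolution_bits": 14,        # assumed effective ADC depth
    "reference": "CMS/DRL",       # assumed referencing scheme
    "bandpass_hz": (0.2, 45.0),   # assumed hardware filtering
    "channels": 14,
}

# A reviewer's checklist: every field must be filled in.
missing = [k for k, v in acquisition.items() if v is None]
print(missing)  # [] -- nothing a reviewer would ask for is absent
```

If the SDK cannot surface values like these, the researcher cannot write a replicable methods section, whatever the underlying signal quality.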
The other people who will buy the EPOC are application developers. On the surface, this type of person is a better target as an early adopter market. The developer wants to build some cool applications and is less interested in the definition of spectra, impedance values etc. To help them do so, Emotiv have created three suites of analysis software that allow the developer to be up and running quickly. The ‘Affectiv’ suite “monitors player emotional states in real-time” whilst the ‘Cognitiv’ suite “reads and interprets a player’s conscious thoughts and intent.” There is also a third suite called ‘Expressiv’ designed to “interpret player facial expression in real-time.” One thing is clear: the EPOC works by analysing a combination of EEG signals and EMG activity (i.e. electrical activity of the facial muscles). In lab work, muscle activity is considered to be an artifact, but EMG can obviously function as a source of data in its own right, hence the Expressiv suite. My guess (and it is only a guess) is that Cognitiv is based on EEG from the frontal cortex (note the large number of frontal sites built into the headset) whilst Affectiv probably combines overt expression of emotion from the facial muscles (e.g. smiling, frowning) with the EEG.
It is difficult in practical terms to separate the effects of muscle activity from the EEG even in the laboratory. For instance, the frontal EEG sites are very susceptible to the influence of eye movement and eye blinks. EEG researchers tend to filter out these factors but it seems clear from the discussion forum that the EPOC doesn’t do this. One part of me wonders if this is really an issue. From the demos I’ve seen of the system, the BCI component (Cognitiv) seems to work by extracting consistencies from the signal in order to create a template for a particular action or command. Perhaps this template contains a substantial amount of muscle activity – in fact, I’d be surprised if it didn’t given that muscle activity is larger than EEG and that the person is actively self-regulating their thoughts, which would normally cause them to furrow the brow. But the most important thing from the perspective of the user experience is that the EPOC system works and works reliably – if the system achieves a level of responsiveness and consistency that is acceptable to the user by tapping muscle activity as well as EEG, does it really matter where the signal comes from?
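To illustrate why blinks are tractable in the lab: because a blink dwarfs cortical EEG at frontal sites, even crude amplitude thresholding catches most of it. A sketch of the sort of rejection step lab pipelines apply (and the EPOC apparently does not); the threshold, amplitudes and sampling rate are my assumptions, not Emotiv parameters:

```python
import numpy as np

FS = 128  # Hz; assumed sampling rate

def flag_blinks(frontal_uv, threshold_uv=75.0):
    """Crude blink rejection: flag samples whose absolute amplitude
    exceeds a threshold, since blinks are far larger than cortical EEG
    at frontal sites."""
    return np.abs(frontal_uv) > threshold_uv

# Synthetic frontal channel: ~10 uV background EEG plus one large
# ~200 uV blink deflection lasting 20 samples.
rng = np.random.default_rng(1)
eeg = 10.0 * rng.standard_normal(FS * 2)
eeg[100:120] += 200.0

bad = flag_blinks(eeg)
print(int(bad.sum()))  # roughly the 20 blink samples are flagged
```

Whether the Cognitiv templates should exclude such samples, or deliberately keep them as part of the “signal”, is exactly the question raised above.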
Of course, it matters to the researcher, who needs to define their signals in order to publish empirical findings, but not to developers. It is easier for the latter to treat the EPOC as a method of delivery for creating BCIs and cool system adaptations, particularly for gaming software. The developer is much more concerned with the ‘what’ of the EPOC system (what can it deliver? what can my game do now that it couldn’t do before?) than the ‘why’ – I think Emotiv understand this very well, hence the inclusion of the three bespoke methods of analysis to provide psychophysiology expertise to the developer.
From the perspective of the company, the developer market contains the buyers who can deliver eye-catching and innovative uses of the system, whilst the researchers will simply ask a lot of awkward questions. So who can blame them for focusing on one category of customer over the other? But consider this: what happens when an app that has been developed for the EPOC doesn’t work as intended? Maybe it has a lot of lag or doesn’t seem to activate the right kind of adaptation; maybe it doesn’t work for everyone; perhaps it only works in certain gaming scenarios and not others. Perhaps the system responds as if the user is angry when he claims to be perfectly calm.
There are 1001 reasons why the EPOC system may not work. For the developer working with ‘black box’ analysis algorithms, troubleshooting will be a bitch. And it’s at this point that the absence of formal, public validation of signal quality bemoaned by the researcher becomes a serious obstacle for the developer working with the EPOC system. Worse still, aside from the company reps on the discussion boards, there’s no community of research staff with specialised knowledge who can help out.
There are at least three potential problems here. The dilemma for the developer is knowing the limits of signal quality and diagnosis provided by the system before she designs her application (rather than stumbling across them at a late stage of the design process). The issue for the researcher is knowing whether the system can deliver sufficient quality and precision to be useful for scientific study. The dilemma for Emotiv is how to protect their intellectual property from potential competitors whilst building a community who can work with their system at a sufficient level of detail.
In my opinion, the company need to do more to engage the research community. A detailed database on system/signal capability is essential for troubleshooting applications under development. At the moment, the incentive to buy this system simply isn’t there for the research customer. There’s not enough detailed information on validation, the research version of the system is more expensive (due to the research API, though an educational discount is available) and, most importantly, this community already have access to equipment that is probably a little superior in terms of data quality and much more flexible with respect to set-up.
It’s a tough sell but an important one. The long-term prospects of the EPOC and similar systems may depend on the development of a user community with the right blend of high-tech concepts and scientific expertise.