I’ve written a couple of posts about the Emotiv EPOC over the years of writing this blog, from user interface issues (in this post) to the uncertainties surrounding the device for customers and researchers (here).
The good news is that research is starting to emerge in which the EPOC has been systematically compared to other devices, so perhaps some of those uncertainties can be resolved. The first study comes from Ekandem et al. and was published in the journal Ergonomics in 2012. You can read an abstract here (apologies to those without a university account who can’t get behind the paywall). The authors performed an ergonomic evaluation of both the EPOC and the NeuroSky MindWave. Data were obtained from 11 participants, each of whom wore either a MindWave or an EPOC for 15 minutes on different days. They concluded that there was no clear ‘winner’ from the comparison. The EPOC has 14 sensor sites compared to the single site used by the MindWave, hence it took longer to set up and required more cleaning afterwards (and more consumables). No big surprises there. It follows that signal acquisition was easier with the MindWave, but the authors report that once the EPOC was connected and calibrated, its signal quality was more consistent than the MindWave’s, despite sensor placement for the former being obstructed by hair.
The MindWave runs on disposable batteries whereas the EPOC is powered by a rechargeable battery. The downside of the EPOC approach is that rechargeables tend to lose capacity over time and become less reliable, which does impact on signal quality. When participants were asked to report comfort levels, the EPOC, surprisingly given its higher number of sensors, was reported to be the more comfortable of the two devices.
The second study I wanted to mention was a comparison of a medical-grade EEG apparatus (the ANT device) and the EPOC conducted by Matthieu Duvinage and colleagues. You can read an abstract of this 2013 paper here. Of course it is always important to consider the context of any comparative study, and in this case the researchers were using the devices to capture the P300 in the context of a BCI-driven spell-checker. The 9 participants who took part in this study were asked to use the BCI whilst stationary or walking on a treadmill. The comparison revealed that, contrary to some claims, the EPOC does not simply measure EMG or ocular artefacts, as classification of the P300 response was above chance in the EPOC condition. Unsurprisingly the signal-to-noise ratio was significantly lower for the EPOC compared to the medical apparatus, which did have a negative impact on classification rates for the former. The authors concluded that whilst the EPOC delivered some advantages, being convenient and low-cost, there were some practical disadvantages compared to the medical device with respect to: (1) comfort (participants found it less comfortable than the medical device), (2) placement (it is difficult in practice to place the electrodes on 10-20 sites with sufficient confidence) and (3) degradation of the electrodes over time and the implications for the lifespan of the apparatus.
Hopefully we will see more of this kind of systematic study emerge with the passage of time. My own personal bugbear with commercial apparatus concerns the use of ‘black box’ algorithms for EEG analysis provided with the hardware (not just the Emotiv system, but also the software produced by NeuroSky). Whilst I appreciate that this software has been constructed for the non-specialist, I do feel the whole processing loop breaks down if we don’t have confidence in the psychophysiological inference (see section 2.1 of this 2009 paper if you want the whole sermon).
On the same theme, I’d like to direct your attention to this post from the BrainEthics blog entitled ‘Can you use the Emotiv scales for anything?’, where data show a clear positive correlation between Frustration and Meditation scores in the Emotiv software. You would imagine that being frustrated is the exact opposite of being in a meditative state, wouldn’t you? But according to these data, one score explains at least a third of the variance in the other. This kind of report more-or-less confirms my suspicions about black-box algorithms, but for the sake of balance I should add that I wouldn’t expect a different outcome if the same analysis were done with the NeuroSky equivalent; also, I have no idea how much data is represented in this analysis (I assume it came from one person and there’s no information about the duration of the recording).
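To make the ‘third of the variance’ point concrete: shared variance is just the square of the correlation coefficient, so a third of the variance corresponds to a correlation of roughly 0.58. The sketch below uses synthetic scores (the real BrainEthics data are not reproduced here, and the variable names are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic time series standing in for the Emotiv Frustration and
# Meditation scores (illustrative only, not the BrainEthics data).
n = 500
frustration = rng.normal(size=n)
noise = rng.normal(size=n)

# Construct 'meditation' so its true correlation with 'frustration' is ~0.58,
# i.e. roughly a third of the variance is shared (0.58 ** 2 ≈ 0.34).
true_r = 0.58
meditation = true_r * frustration + np.sqrt(1 - true_r**2) * noise

r = np.corrcoef(frustration, meditation)[0, 1]
print(f"r = {r:.2f}, variance explained = {r**2:.2f}")
```

The point being that two scales marketed as measuring opposite states should, if anything, correlate negatively; a positive correlation of this size suggests the two metrics are partly driven by the same underlying signal or artefact.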
Finally, one should not confuse the reliability of the black-box algorithms with the reliability of the hardware itself. The former is a weak spot for these devices, and one that can fundamentally undermine any design work; I’d advise people to do the necessary reading and work with the raw signals at all times.
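Working with the raw signals can be less daunting than it sounds. As a minimal sketch, here is how one might compute band power from a raw single-channel trace using Welch’s method, rather than relying on a vendor’s pre-packaged ‘Meditation’ score. The 128 Hz sampling rate and the synthetic signal are assumptions for illustration, not output from an actual headset:

```python
import numpy as np
from scipy.signal import welch

def band_power(raw, fs, f_lo, f_hi):
    """Power of `raw` in the [f_lo, f_hi] Hz band, via Welch's PSD estimate."""
    freqs, psd = welch(raw, fs=fs, nperseg=min(len(raw), 2 * fs))
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return np.trapz(psd[mask], freqs[mask])

# Synthetic one-channel trace standing in for raw EEG: a dominant 10 Hz
# (alpha-band) oscillation plus a little noise, sampled at 128 Hz.
fs = 128
t = np.arange(0, 10, 1 / fs)
raw = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

alpha = band_power(raw, fs, 8, 12)    # alpha band (8-12 Hz)
beta = band_power(raw, fs, 13, 30)    # beta band (13-30 Hz)
print(alpha > beta)  # True: the 10 Hz component dominates
```

The advantage of this route is transparency: every step from raw samples to a derived metric is inspectable, which is exactly what the black-box scales deny us.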