Time Keeps on a Slippin

Most people I know who work in the field of physiological computing purchase off-the-shelf sensors for their research. There’s nothing specifically wrong with this: most of us are not engineers, nor do we have the time to become one, as our interests lie elsewhere. At LJMU all our equipment is off-the-shelf, and we have some damn fine devices which we’ve used in our work (e.g. see my review of the BM CS-5 cheststrap). However, I’ve noticed we place a lot of faith (and money) in these devices to do what they say on the tin (e.g. see the issue I raised last year about the software bundled with the BioHarness). Personally I like to know the limitations of any equipment I’m using, and if I find anything outside the spec I’ll try to figure out why (sometimes to my detriment, as you’ll see below). It’s not that I’m particularly troubled if a sensor has defects, as I don’t expect them to be perfect; the problem I have is with defects I don’t know about, as they can make things problematic, to say the least. For example, the first off-the-shelf sensor I ever worked with was the WaveRider Pro, a 4-channel biofeedback device which had a slight problem with counting time.

The WaveRider API was originally designed for a 16-bit operating system, but a 32-bit version had been made available by a third party. My first affective game made use of this driver. The game used the player’s heart rate, which the driver derived from the raw ECG data, to adapt the gameplay (i.e. I didn’t derive the player’s heart rate from the raw data myself). At this point all was good. Later on I wanted to use the WaveRider in an experimental setting and thus required the raw ECG data. Now, the driver didn’t time-stamp the data stream, which is typical of most sensing devices I know of, as it’s easier to reconstruct the time series using the device’s sampling rate as specified by the manufacturer (e.g. if the device has a sampling rate of 128 Hz then each sample has a time index of [N x 1/128] seconds, where N is the sample number)*. To ensure everything was working I simulated a hardware clock to check I was getting the correct number of samples, in this case 128 per second, and this is where everything started to go wrong.
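The reconstruction described above can be sketched in a few lines; this is a minimal illustration of the arithmetic, and the function name and constant are my own, not part of the WaveRider API:

```python
# Reconstruct a time index for each sample from the manufacturer's
# specified sampling rate, since the data stream carries no timestamps.

SPEC_RATE_HZ = 128  # sampling rate claimed on the spec sheet

def timestamp(sample_index, rate_hz=SPEC_RATE_HZ):
    """Time of sample N in seconds, assuming a constant sampling rate."""
    return sample_index / rate_hz

# Sample 256 of a 128 Hz stream should land exactly 2 seconds in.
print(timestamp(256))  # → 2.0
```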

Instead of 128 samples per second I was getting a high level of variance which I could not attribute to the various layers of software an API call had to go through (a variance of a few samples I’d let slip, but not tens). To cut a long story short, I had to take apart the 32-bit driver, and found a buffering bug in the API call to the raw feed. At the top level, a programmer using the derived heart rate function would have been none the wiser to this loss of data**. Nor would anyone using the raw ECG data through the device’s bundled application, as its GUI doesn’t harbour the bug. The only way of identifying the bug was by simulating a hardware clock and checking that the correct number of samples had been received over a set period. At this point the design choices taken in the programming of the 32-bit driver had led to the data loss in the raw signal. Once I fixed this I got a consistent number of samples per second, and this is where the manufacturer’s specification came into play. Instead of receiving 128 samples per second I was consistently getting 129. By this time I had ripped apart the API; there was nothing else on the software side I could think of that would cause me to be gaining samples, or to put it another way, time. Subsequently I left the sensor to run for an hour to see if the sample rate was consistently 129 Hz, which indeed it was.
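The hour-long run amounts to comparing an observed sample count against the nominal rate over a known wall-clock period. A sketch of that check, using the numbers from the story above (the function is my own illustration, not the actual test harness):

```python
def observed_rate_hz(n_samples, elapsed_s):
    """Observed sampling rate from a count of samples over a wall-clock period."""
    return n_samples / elapsed_s

NOMINAL_RATE_HZ = 128

# An hour-long run that delivers 129 samples every second reveals
# the device is running a full hertz faster than specified.
rate = observed_rate_hz(129 * 3600, 3600.0)
print(rate - NOMINAL_RATE_HZ)  # → 1.0
```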

The only conclusion I could come up with was that the clock in the physical device (i.e. the oscillator) was not to specification: it was a hertz out. Now, a single hertz was not really going to affect any of my signal analysis, and by this time my little bug hunt had eaten up an inordinate amount of time for a relatively minor issue, but it was enough to make me wary of trusting off-the-shelf sensors at face value. In my line of work (interactive systems and signal analysis) I need to know what issues, if any, a sensor has so I know whether I can compensate for them. For example, if I know the oscillator is consistently out by a few hertz (or even tens) then all I need do is account for the different sampling rate, which is pretty much me altering a single line of code. Problems arise when I don’t have the correct information about a sensor, not when it’s different from what it says on the tin (obviously I would like some things to be correct, or failing that a discount).
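To put some numbers on why the correction matters: if the device really runs at 129 Hz but the time series is reconstructed at the specified 128 Hz, the reconstructed timeline stretches. A back-of-the-envelope sketch (my own illustration of the drift, not a figure from the original post):

```python
SPEC_RATE_HZ = 128      # rate on the spec sheet
MEASURED_RATE_HZ = 129  # rate observed with the simulated hardware clock

# Samples actually produced in one hour of recording.
samples_per_hour = MEASURED_RATE_HZ * 3600

# Duration the data appears to span if indexed at the specified rate.
apparent_duration_s = samples_per_hour / SPEC_RATE_HZ

drift_s = apparent_duration_s - 3600
print(drift_s)  # → 28.125 (seconds gained per hour)
```

The one-line fix is simply to index the samples at the measured rate rather than the specified one.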

Over the years I’ve become accustomed to a variety of off-the-shelf sensors, some good and some bad. The majority have their own little quirks which I (and others) have had to work around. The trick for those of us in this line of work is not just deciding what counts as an acceptable issue, but figuring out whether there are issues in the first place, and so the lesson of this tale of woe becomes: when selecting a sensor for your project, decide on what limitations are appropriate for the environment you wish to use that device in; it makes everything so much less problematic.

* For wireless devices a timer is preferable (or a packet count) in order to compensate for any data loss in the transmission.
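One way a packet count helps, sketched with a hypothetical per-packet sequence number (not any particular device’s protocol): gaps in the sequence reveal how many packets were lost in transmission, so the time series can be padded rather than silently compressed.

```python
def count_dropped(seq_numbers):
    """Count packets lost in transit from gaps in consecutive sequence numbers."""
    dropped = 0
    for prev, cur in zip(seq_numbers, seq_numbers[1:]):
        dropped += cur - prev - 1
    return dropped

# Packet 3 never arrived, so one packet's worth of samples must be
# accounted for when reconstructing the time series.
print(count_dropped([0, 1, 2, 4, 5]))  # → 1
```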

** Because of the way in which the WaveRider API works, calls to the derived heartbeat rate do not suffer from the buffering bug; only the raw feed does.


About Kiel Gilleade

I'm a computer scientist with a background in the development of physiological interactive systems. I have worked on a range of physiological interactive systems, including computer games, interactive artworks and life tracking. My research interests focus on the development and evaluation of physiologically interactive technologies. I am currently based in Antibes, France.

6 thoughts on “Time Keeps on a Slippin”

  1. michael roberts

    Hey Stephen, hope all is well. Really interesting stuff, put me in mind of Emotiv’s EPOC. Would be really curious to see some controlled evaluations of signal strength, compared with Biosemi and/or a dry electrode system (much improved apparently).

  2. Kiel Gilleade Post author

    Hi Michael,

    I expect we’ll see a lot of sensor comparison papers in the near future as low cost devices enter the market. I have a few in mind myself given my ever expanding collection of wireless tech.

    Just recently finished reading a paper on portable skin conductance comparing the Q-Sensor prototype against the Flexcomp Infiniti (http://affect.media.mit.edu/pdfs/10.Poh-etal-TBME-EDA-tests.pdf). If you know what you want to measure and how it’s to be deployed, one can certainly drop the costs.

  3. michael roberts

    Thanks for the link Kiel, must confess that I’ve not looked at galvanic skin response in any detail, but I liked the portable and discreet wrist band mounting and capacity for real time data processing. Needs a matching head band…with some EEG sensors! But I’m biased.

  4. Guillaume

    Thanks for the EDA link, very interesting. Most of the papers I know use known physiological responses to compare devices, denoising methods, etc.; for instance, the amplitude of the P300 wave is often used to compare EEG denoising methods. I would be curious to know if any of you know another method to evaluate the quality of EEG and other physiological signals.

  5. Kiel Gilleade Post author

    Not sure if I understood that correctly, but are you asking what other physiological responses are used as reference points to compare different techniques and devices against? In the case of cyclic physiological signals such as cardiovascular measures, the number of recognised waveforms tends to be the standard metric used to evaluate a technique or device.

    On the other hand, if you’re asking what physiological signal is the most effective for a given task, then that’s a whole different ball game. I’ve only seen a handful of papers on this topic, and it’s one of the issues I wanted to discuss at the CHI2011 workshop in May. Or I could be wrong on both accounts and I’m answering no one’s question, in which case I blame the heat in my office, which has slowly been frying my brain today 🙂

