Blink detection and attention evaluation: the NeuroSky MindWave

Hardware for UX evaluation

As part of Project Sunflower, we took various approaches to interface evaluation. Alongside heuristic evaluation, walkthroughs and so forth, we also used various bits of hardware to support trials. This post describes setup and sample code for the NeuroSky MindWave, an inexpensive BCI (brain-computer interface – think ‘electroencephalogram’, or ‘brain wave monitoring’) that uses a single sensor placed on the forehead to measure eye blinks, ‘attention’ and ‘meditation’. The latter two variables shouldn’t be taken at (scuse the pun) face value: according to a report by Travis Ross, they’re based on a proprietary algorithm, with attention reported to relate to beta waves, hence linked to wakefulness/focus, and meditation linked to alpha waves, i.e. level of calm. Vague, admittedly, but then these devices are priced for, and in large part targeted at, the consumer market. If you’ve ever seen those Jedi force-trainer toys, those are based around the same technology.

Setup

Having installed the software prerequisites and drivers, the next thing is to run the Thinkgear Connector. This is an extremely helpful little piece of kit, which listens to the USB radio link and makes the sensor data available to applications. This reduces the connection step to a couple of lines of code. Since the Thinkgear Connector will return JSON upon request, the data packets are easy to parse.

Code

import sys
import json
import time
from telnetlib import Telnet

# Connect to the ThinkGear Connector, which relays sensor data on port 13854
tn = Telnet('localhost', 13854)
start = time.time()

# app registration step (in this instance unnecessary)
# tn.write(b'{"appName": "Example", "appKey": "9f54141b4b4c567c558d3a76cb8d715cbde03096"}')
tn.write(b'{"enableRawOutput": true, "format": "Json"}')

outfptr = None
if len(sys.argv) > 1:
    outfptr = open(sys.argv[-1], 'w')

# default values
eSenseDict = {'attention': 0, 'meditation': 0}
waveDict = {'lowGamma': 0, 'highGamma': 0, 'highAlpha': 0, 'delta': 0,
            'highBeta': 0, 'lowAlpha': 0, 'lowBeta': 0, 'theta': 0}
signalLevel = 0

# The loop body below is reconstructed here as a sketch; see the linked
# repository for the canonical version. Each line from the Connector is a
# JSON packet, which may carry eSense, eegPower, blink or signal fields.
i = 0
while i < 1000:
    line = tn.read_until(b'\r').decode('utf-8', 'ignore').strip()
    if not line:
        continue
    try:
        packet = json.loads(line)
    except ValueError:
        continue
    signalLevel = packet.get('poorSignalLevel', signalLevel)
    eSenseDict.update(packet.get('eSense', {}))
    waveDict.update(packet.get('eegPower', {}))
    blink = packet.get('blinkStrength', 0)
    # Columns: elapsed time, blink strength, attention, meditation, signal
    row = "%f %d %d %d %d\n" % (time.time() - start, blink,
                                eSenseDict['attention'],
                                eSenseDict['meditation'], signalLevel)
    if outfptr:
        outfptr.write(row)
    i += 1

(Edit: See https://github.com/etonkin/neurosky-telnet for the full, canonical code)

Example output

The code as written above produces very simple whitespace-separated columnar output, which has the benefit of being very easy to plot using something like Gnuplot:

plot "test-output.log" using 1:3 with lines, "" using 1:4 with lines, "" using 1:5 with lines

Graph of three variables: working on a language problem.

Some sensor data captured: the subject was working on a language problem.

Sensor data. Activity: Watching TV

Sensor data captured: the subject was watching TV.

Note: the number of blinks captured is low enough that the MindWave is probably not picking up all blink events.

Project Sunflower: Time to Launch Application, Open a Book and Flip Page

Well, we now have results regarding the time taken for the Apple iPad 2, Amazon Kindle DX and Motorola XOOM to render eBooks. We installed iBooks on the iPad and the Kindle App for Android on the XOOM. The Google Books app can’t be installed (yet) in the UK due to copyright issues. We recorded the time taken by the devices to open the app, open a book, and flip a page.

Since no emulator performs exactly like the physical device, we chose to take a practical approach to measuring the times. The render times were measured in two ways: one manual, the other using a video camera.

Manual Method

Hold a stopwatch in one hand, and tap on the device with the other. For example, when using the iPad, we held the stopwatch in the left hand and tapped the iPad with the right. Start the stopwatch precisely when the iPad is tapped, and stop it when the desired action completes. This method depends heavily on the user’s reflexes, and you may have your doubts about the level of precision when it comes to results. Let me tell you, the results were surprisingly accurate; read the figures to see for yourself.

Camera Method

This is a slightly more sophisticated way of measuring, though just as simple. All you need to know is the fps (frames per second) at which the video is recorded, and you need a video player that can replay the video frame by frame. Record the desired action on camera, then replay the video frame by frame. The number of frames traversed from the start to the end of the task, divided by the frame rate, gives a more precise time taken to complete the task than the manual method.
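As a worked example of the arithmetic (a Python sketch; the frame count and frame rate below are hypothetical, not our measured data):

def frames_to_seconds(frame_count, fps):
    """Convert a traversed frame count to elapsed seconds."""
    return frame_count / float(fps)

# Hypothetical example: a page flip spanning 27 frames at 30 fps
elapsed = frames_to_seconds(27, 30)
print("%.3f s" % elapsed)  # 0.900 s

At 30 fps each frame is 1/30 s, so the resolution of this method is about 0.033 s, which is considerably finer than human stopwatch reflexes.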

Results

We measured the times for six free eBooks per device. Six readings were taken per task, and the average time for each task was calculated.
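For reference, the average and sample standard deviation over the six readings per task were computed in the usual way; here is a quick Python sketch using made-up readings (not our measured data):

import math

def mean(xs):
    # Arithmetic mean of the readings
    return sum(xs) / float(len(xs))

def sample_stdev(xs):
    # Sample standard deviation (divide by n - 1)
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

# Hypothetical six page-flip readings, in seconds
readings = [0.52, 0.49, 0.55, 0.51, 0.50, 0.53]
print("mean  = %.3f s" % mean(readings))
print("stdev = %.3f s" % sample_stdev(readings))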

iPad (Average from six readings)


Kindle DX (Average from six readings)

There is no application load time as all the books are displayed directly on the Homescreen.

XOOM (Average from six readings)


Both methods gave fairly similar results. The average differences in the times between the two methods are:

The standard deviations for each method are shown below:

iPad (Standard Deviation)


Kindle DX (Standard Deviation)


XOOM (Standard Deviation)


The standard deviation tables show that the camera method varied less from the average than the manual method in all but two cases, where the difference is only 1/100th of a second. These two cases may safely be ignored.

Although both methods gave fairly similar results, it must be noted that the manual method will give varied results from test to test. It is completely dependent upon user reflexes, and slow reflexes could seriously skew the results. The camera method does take more time, but the results are more accurate and dependable. So, I’d recommend the camera method.

All the recorded times are averages, and the times may scale proportionally with the length of the books. These results give us a fair idea of the various devices when it comes to render speed and page flipping. The iPad and XOOM clearly render faster than the Kindle DX. However, these results pertain only to device capabilities and say nothing about the user experience. What makes an eBook reader good or bad depends not only on render speed, but more so on the user experience the device has to offer. A detailed usability study of the devices will be undertaken soon, which will shed light on the varied user experience and help us better understand what users expect from an eBook reader.