Section BRAIN-COMPUTER INTERFACES, COGNITIVE NAVIGATION WORKSHOP AND NEUROENGINEERING
OM&P
A Human-Computer Interface Based on Electromyography and Factors Limiting its Performance
N.P. Krilova1*, I. Kastalskiy1, V.A. Makarov1,2, S.A. Lobov1
1 Lobachevsky State University of Nizhny Novgorod, Gagarin Ave. 23, 603950 Nizhny Novgorod, Russia;
2 Instituto de Matemática Interdisciplinar, Applied Mathematics Dept., Universidad Complutense de Madrid, Avda Complutense s/n, 28040 Madrid, Spain
* Presenting e-mail: ned-k@mail.ru
Surface electromyographic (sEMG) signals represent a superposition of motor unit action potentials that can be recorded by electrodes placed on the skin. We propose a human-computer interface based on sEMG that provides combined command-proportional control by hand gestures (see [1] for details), with two degrees of freedom for flexible movement of a cursor on a computer screen, and allows simulating "mouse" clicks. We use an artificial neural network (ANN) to process sEMG signals and recognize gestures, both for mouse clicks and for gradual cursor movements. Figure 1 shows the general scheme of signal processing.
Analyzing different user groups, we found statistically significant differences between male and female subjects and between physically trained and untrained people. To gain deeper insight, we introduced the synergist-antagonist coefficient (CSA), which estimates the degree of "muscle cooperation". Experimental data suggest that the performance of the sEMG interface depends strongly on CSA. Thus, the significant difference between physically trained people (practicing sports or other activities involving fine manual motor skills) and untrained people can be explained by the fact that training hand muscles and the related brain circuits in everyday life may lead to more efficient motor control. The higher interface performance found for male subjects may be connected with variations in body composition and, especially, in the content of fat tissue. Indeed, we revealed a statistically significant correlation between the classification error and body fat.
Fig. 1. Information flux in the MyoCursor system. Raw sEMG activity is mapped into cursor movements and mouse clicks in Windows OS. First, the RMS (root mean square) and MAV (mean absolute value) of the activity are evaluated. The RMS pattern is fed into the input layer of an artificial neural network with one hidden layer. The network output of seven neurons provides two commands for mouse-like clicking and four commands for cursor movements; these are multiplied by the MAV to set the cursor speed
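The pipeline in Fig. 1 can be sketched as follows. The channel count, window length, and hidden-layer size below are illustrative assumptions (the abstract does not specify them), the weights are random placeholders rather than trained values, and the role of the seventh output (here treated as a "rest" state) is our assumption:

```python
import numpy as np

def rms(window):
    # Root mean square per channel: the amplitude pattern fed to the classifier
    return np.sqrt(np.mean(window ** 2, axis=0))

def mav(window):
    # Mean absolute value over all samples and channels: overall effort level
    return np.mean(np.abs(window))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

class GestureMLP:
    """One-hidden-layer network mapping an RMS pattern to 7 outputs:
    2 click commands + 4 cursor directions (+ an assumed rest state).
    Weights here are random placeholders; in practice they are trained."""
    def __init__(self, n_in=8, n_hidden=16, n_out=7, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        h = np.tanh(x @ self.W1 + self.b1)
        return softmax(h @ self.W2 + self.b2)

def cursor_command(window, net, gain=1.0):
    """Classify the gesture from the RMS pattern; scale cursor speed by MAV."""
    probs = net.forward(rms(window))
    command = int(np.argmax(probs))   # discrete command (click / direction)
    speed = gain * mav(window)        # proportional component
    return command, speed
```

This separation of a discrete command (from RMS classification) and a continuous gain (from MAV) is what makes the control "command-proportional".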
Thus, both muscle efficiency and body fat significantly influence the performance of sEMG interfaces. Our data suggest that their influence is independent, since we found no significant correlation between the synergist-antagonist coefficient and body fat. Figure 2 shows a regression plane that establishes a power-law relation of the classification error with CSA and an exponential relation with body fat.
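The stated relations are consistent with a regression that is linear in ln(CSA) and in BF: ln(E/Em) = a·ln(CSA) + b·BF + c, which is a power law in CSA and an exponential in BF. A minimal sketch of fitting such a plane by least squares, on synthetic data (all coefficient values below are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
csa = rng.uniform(0.5, 2.0, n)   # synergist-antagonist coefficient (synthetic)
bf = rng.uniform(10.0, 35.0, n)  # body fat, % (synthetic)

# Synthetic ground truth: E/Em = CSA^a * exp(b*BF + c) * noise
a_true, b_true, c_true = -0.8, 0.05, -1.0
log_err = a_true * np.log(csa) + b_true * bf + c_true + rng.normal(0.0, 0.1, n)

# Linear regression in (ln CSA, BF) space: the fitted plane corresponds
# to a power law in CSA and an exponential dependence on BF
X = np.column_stack([np.log(csa), bf, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, log_err, rcond=None)
a_hat, b_hat, c_hat = coef
```

Normalizing the error by its median Em before taking the logarithm, as in Fig. 2, makes the intercept comparable across subjects.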
The results also suggest that personal performance can be improved by training the user. Two subjects showed positive dynamics in performance and in the synergist-antagonist coefficient after several days of training, which included sEMG feedback and playing a testing game with the sEMG interface.
Fig. 2. Dependence of the classification error, ln(E/Em) (E is the error and Em its median), on the body fat, BF (in %), and the synergist-antagonist coefficient, CSA, with linear regression (R2 = 0.379)
References
1. Lobov S.A., Mironov V.I., Kastalskiy I.A., Kazantsev V.B. Combined Use of Command-Proportional Control of External Robotic Devices Based on Electromyography Signals. Sovremennye tehnologii v medicine 2015; 7(4): 30-38, http://dx.doi.org/10.17691/stm2015.7A04
Fixation-Based Eye-Brain-Computer Interfaces: Approaching a Better Human-Computer Symbiosis
S.L. Shishkin1*, Y.O. Nuzhdin1, A.G. Trofimov2, E.P. Svirin1, A.A. Fedorova1, I.A. Dubynin1 and B.M. Velichkovsky1
1 NRC «Kurchatov Institute», Moscow, Russia;
2 NRNU MEPhI, Moscow, Russia.
* Presenting e-mail: sergshishkin@mail.ru
Computers are powerful tools to augment many of our intellectual abilities. However, the effectiveness of our interaction with computers depends on interfaces between them and our brains (Engelbart, 1962).
Graphical user interfaces (GUIs) and the input devices compatible with them, such as computer mice, have made the way we send commands to computers relatively fast and fluent. Can we improve the interaction further? When we fixate a GUI button or a web link with our gaze and decide to click on it, is it really necessary to approach it with a cursor by manually moving a mouse, and then to click the mouse button with a finger? Our gaze already indicates the position on the screen, and existing eye trackers are able to report this position. Can we design a brain-computer interface (BCI) that could reveal our intention to click so promptly and reliably that our interaction with computers would become more effective than with conventional input devices?
Mental-imagery-based BCIs have already been applied to supplement gaze-based interaction with a "mental click", but in their operation the click required additional time on the order of seconds, evidently contradicting the idea of fluent control. It is much more desirable to recognize the intention to act on a certain screen position directly from the brain activity patterns that accompany intention-specific fixations (Velichkovsky and Hansen, 1996).
The first attempts to implement this approach were made by Zander and colleagues (Protzak et al., 2013). They used electroencephalogram (EEG) features to differentiate spontaneous fixations from fixations used to control a computer. However, the fixation threshold for issuing a command in these studies was too long (1 s), so the interaction could again hardly be considered fluent. Moreover, the participants' task was too simple compared with real-life tasks.
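For reference, a dwell-based "gaze click" of the kind criticised above can be sketched as follows. The sampling rate and dispersion radius are illustrative assumptions; only the 1 s dwell threshold comes from the text:

```python
import math

def detect_dwell_click(gaze_samples, rate_hz=60, dwell_s=1.0, radius_px=30):
    """Return the sample index at which a dwell 'click' fires, or None.
    A click fires once gaze stays within radius_px of the current fixation
    anchor for dwell_s seconds (here the 1 s threshold discussed above)."""
    need = int(dwell_s * rate_hz)
    cx, cy = gaze_samples[0]  # anchor of the candidate fixation
    count = 0
    for i, (x, y) in enumerate(gaze_samples):
        if math.hypot(x - cx, y - cy) <= radius_px:
            count += 1
            if count >= need:
                return i
        else:
            cx, cy = x, y  # gaze moved away: start a new candidate fixation
            count = 1
    return None
```

The drawback is visible in the structure itself: every intentional click costs a full second of waiting, and shortening `dwell_s` increases false triggers on spontaneous fixations, which is exactly the trade-off an EEG-based intention detector aims to break.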
To study the issue in more complex settings, we developed a gaze-controlled computer game, EyeLines, and recorded