
Improving eye-brain-computer interface performance by using electroencephalogram frequency components

Shishkin SL1, Kozyrskiy BL1,3, Trofimov AG1,3, Nuzhdin YO1, Fedorova AA1, Svirin EP1, Velichkovsky BM2

1 Department of Neurocognitive Technologies, Kurchatov Complex of NBICS Technologies, National Research Centre Kurchatov Institute, Moscow, Russia

2 Kurchatov Complex of NBICS Technologies, National Research Centre Kurchatov Institute, Moscow, Russia

3 Faculty of Cybernetics and Information Security, National Research Nuclear University MEPhI, Moscow, Russia

Eye-brain-computer interfaces (EBCIs) could combine the advantages of eye-tracking-based control of technical devices and of brain-computer interfaces. Such systems are intended both for patients with various motor impairments and for healthy individuals. The effectiveness of an EBCI largely depends on its ability to detect the user's intention to issue a command from the electroencephalogram (EEG) recorded during a gaze fixation, i.e., within just hundreds of milliseconds. These strict requirements necessitate the fullest possible use of the information contained in the EEG for more accurate classification of gaze fixations as spontaneous or "control". This work describes our attempt to use for classification not only amplitude statistical features but also wavelet features characterizing oscillatory EEG components within the interval of 50-500 ms from gaze fixation onset. The integral index of classification accuracy, AUC, significantly depended on the feature set, reaching its highest value (0.75, averaged over the group of 8 participants) for the combined amplitude-wavelet set. We believe that further improvement of this method will facilitate the practical application of EBCIs.

Keywords: brain-computer interface, eye-brain-computer interface, electroencephalogram, EEG, gaze-based control, control gaze fixation, eye tracking, video-oculography, classification, wavelets

Funding: this work was partially supported by the Russian Science Foundation, grant no. 14-28-00234 (acquisition and preprocessing of experimental data), and by the Russian Foundation for Basic Research, grant no. 15-29-01344 (evaluation of the significance of wavelet features for classification).

Correspondence should be addressed: Sergey Shishkin

pl. Akademika Kurchatova, d. 1, Moscow, Russia, 123182; [email protected]

Received: 08.04.2016 Accepted: 15.04.2016


Brain-computer interfaces (BCIs) are systems for operating computers and other devices connected to them based on the detection of brain activity patterns associated with control commands. They have been designed primarily to assist paralyzed patients [1-3]. At the same time, the accuracy and operating speed of the vast majority of BCIs are still low. It is unclear whether BCIs can be used beyond the range of tasks where it is sufficient to issue very simple commands but important that these commands be given "straight from the brain" (for example, in post-stroke rehabilitation [4]). A satisfactory BCI spelling rate (50 characters per minute in healthy individuals) was achieved only in a recent study [5] that used rhythmic visual stimulation; it is still unclear whether the use of such stimulation in BCIs is safe.

Interestingly, all non-invasive BCIs with high accuracy and speed utilize the EEG response to visual stimuli at which the user directs his or her gaze. This means they can be used only if the patient does not suffer from serious visual impairment or eye movement disorders and retains the ability to voluntarily direct the gaze towards specific screen areas associated with control commands (i.e., to fixate the gaze on virtual "buttons"). When this is the case, however, it is also possible to control computers and other devices connected to them by detecting gaze direction with eye tracking (video-oculography).

Current methods of gaze-based control demonstrate relatively good accuracy, speed and usability when used for text entry [6]. However, attempts to apply them to a wider range of tasks are hampered by the so-called "Midas touch" problem [7]. Just as King Midas of the ancient Greek myth turned everything he touched into gold, gaze-controlled devices are non-selective in translating gaze fixations or eye movements into commands: the user issues commands even without intending to. This is because eye movements are a crucial component of visual function and are normally spontaneous, easily slipping from conscious control even when attention is focused on them. Current solutions to this problem either make the control process very slow and tiring or apply only to a limited range of tasks.

As early as 1996, it was proposed to solve the Midas touch problem and create a high-performance universal interface by combining "eye-mouse" control with a BCI [8]. For a number of years the combination of these two technologies [9] remained quite mechanical in nature and did not produce systems with fast response and good ergonomic properties. An innovative solution was suggested by Thorsten Zander's group, who returned to the idea of a natural combination of eye tracking and BCI [8] within the framework of a new trend, the development of so-called "passive BCIs", i.e., BCIs that respond to patterns of brain activity unrelated to deliberate efforts to issue a command [10]. Zander and his colleagues showed that gaze fixations used for control ("control" fixations) can be differentiated from spontaneous (visual) fixations using the electroencephalogram (EEG) recorded during the fixations, even when the control markers in the EEG were not intentionally evoked (the subjects were given no additional tasks and no stimuli were presented at the "control" position) [11]. However, in their study control could be implemented only by a long (1,000 ms) gaze fixation on a single screen target.

Our group has developed an eye-brain-computer interface (EBCI) method that allows EEG-based classification of shorter fixations, 500 ms in duration. In our experiment, subjects played the computer game Lines using their gaze alone. Each move was made by fixating the gaze on one of 50 elements on the game board. The classifier was trained to differentiate between the EEG recorded during those fixations and the EEG recorded during fixations on the same elements with control switched off, i.e., during presumably spontaneous fixations [12; Shishkin et al., in prep.]. Owing to the shorter fixation duration, subjects perceived control as natural and comfortable. The number and location of control-sensitive visual elements in our method are limited only by eye tracker capabilities. However, the fixation-related amplitude features of the EEG (used in our earlier work) did not provide control detection accuracy sufficient for practical application of the technology.

In this study, we analyze the possibility of improving the accuracy of the EBCI classifier that automatically differentiates control gaze fixations from spontaneous ones by using features of oscillatory EEG components in addition to EEG amplitude features. Since short EEG intervals must be used, during which both amplitude and frequency components can vary over time, and because of the high dimensionality of time-frequency data and other significant differences between such data and amplitude data, a special scheme had to be developed for extracting quantitative parameters of the EEG recorded during gaze fixations.

METHODS

The experiment

We used EEG recordings obtained in our earlier experimental study; its results, together with a detailed description of the experimental methods, will be presented in a separate article [Shishkin et al., in prep.].

Our study was conducted in compliance with the guidelines of the Declaration of Helsinki. It enrolled 8 practically healthy individuals (7 male and 1 female) aged 21-48 (mean age 29), all of whom gave informed consent. Gaze was recorded with an EyeLink 1000 Plus eye tracker (SR Research, Canada). Fixations were detected on-line using a variance (dispersion) criterion. Synchronously, the EEG from 19 electrodes (Fz, F3, F4, Cz, C3, C4, Pz, P1, P2, P3, P4, POz, PO3, PO4, PO7, PO8, Oz, O1, O2) and the electrooculogram (EOG) were recorded with an actiCHamp system (Brain Products, Germany). The EOG was used to monitor EEG artifacts. Gaze direction, EEG and EOG were all recorded at 500 Hz.
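As an aside, a dispersion ("variance") fixation criterion of the kind used for on-line detection here can be sketched in a few lines. The sketch below follows the common I-DT scheme; the 100 ms minimum duration and 1 degree dispersion threshold are illustrative assumptions, not parameters reported in this study.

```python
import numpy as np

def detect_fixations(x, y, fs=500, min_dur_ms=100, max_disp_deg=1.0):
    """Return (start, end) sample indices of detected fixations.

    x, y: 1-D numpy arrays of gaze coordinates in degrees, sampled at fs Hz.
    """
    win = int(min_dur_ms * fs / 1000)   # minimal fixation window in samples
    fixations, i, n = [], 0, len(x)
    while i + win <= n:
        j = i + win
        # dispersion = (max - min) along both axes within the window
        disp = (x[i:j].max() - x[i:j].min()) + (y[i:j].max() - y[i:j].min())
        if disp <= max_disp_deg:
            # grow the window while dispersion stays under the threshold
            while j < n and (x[i:j + 1].max() - x[i:j + 1].min()
                             + y[i:j + 1].max() - y[i:j + 1].min()) <= max_disp_deg:
                j += 1
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations
```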

The gaze-based control algorithms and the task the subjects performed were exactly as described in our preliminary study [12]; only the most important details are given here. The subjects played a modified version of the computer game Lines in which every move could be performed by a sequence of 3 gaze fixations, each exceeding a 500 ms duration threshold. Each sequence started with a fixation on a particular screen area, where a special "control on" indicator appeared once the threshold had been reached. The EEG recorded during these fixations constituted the first class of data (control fixations). The other data class (non-control fixations) consisted of the EEG recorded during fixations that also exceeded the threshold but, according to the game rules, did not result in a move. Fixation-based game control, EEG/EOG synchronization and the recording of gaze fixation data were performed using custom in-house software.

For each subject, an average of 155 control fixations (range: 120-184) and 159 non-control fixations (range: 114-208) were recorded.

Feature extraction

To extract the EEG wavelet features, we chose the interval of 50-500 ms after fixation onset, because the preceding interval contained artifacts related to gaze shifts, and the subsequent interval could not be used for detecting the intention to issue a command in the on-line mode. The analyzed interval contained almost no artifacts, so no artifact correction or removal procedures were applied. In our earlier work [12; Shishkin et al., in prep.] we showed that, in our EBCI paradigm, a considerable difference in EEG amplitudes between control and non-control fixations was present only in the second half of the fixation interval. Therefore, the interval of 200-500 ms after fixation onset was used to obtain the amplitude features in the current study.

Amplitude features were obtained by averaging amplitude values in each EEG channel separately within overlapping 50 ms windows. To reduce the influence of slow oscillations and the direct current component, the baseline was corrected by subtracting the mean value over the interval of 200-300 ms after fixation onset from these averages. The resulting "raw" amplitude features constituted the feature vector characterizing a single trial corresponding to one fixation.
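A minimal sketch of this computation, assuming a 25 ms step for the overlapping windows (the text does not quantify the overlap):

```python
import numpy as np

def amplitude_features(epoch, fs=500, t0_ms=200, t1_ms=500,
                       win_ms=50, step_ms=25, base_ms=(200, 300)):
    """epoch: (n_channels, n_samples) EEG starting at fixation onset."""
    # baseline = per-channel mean over 200-300 ms after fixation onset
    b0, b1 = int(base_ms[0] * fs / 1000), int(base_ms[1] * fs / 1000)
    baseline = epoch[:, b0:b1].mean(axis=1, keepdims=True)
    win, step = int(win_ms * fs / 1000), int(step_ms * fs / 1000)
    feats = []
    # mean amplitude in overlapping 50 ms windows over 200-500 ms
    for s in range(int(t0_ms * fs / 1000),
                   int(t1_ms * fs / 1000) - win + 1, step):
        feats.append((epoch[:, s:s + win] - baseline).mean(axis=1))
    return np.concatenate(feats)   # feature vector for one trial
```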

Wavelet features were obtained using the Morlet wavelet transform, with the scale range corresponding to the frequency range of 5-30 Hz. The higher the frequency corresponding to a given scale, the more wavelet coefficients were used to describe the trial. To reduce the noise produced by irrelevant features, only 30 % of the wavelet time-frequency features were used, namely those that differed most between spontaneous and control fixations (i.e., those with the highest coefficient of determination, R2).
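The following sketch illustrates such a pipeline: complex Morlet coefficients for 5-30 Hz, magnitudes taken as time-frequency features, and the 30 % of features most correlated with the class labels retained. The wavelet width parameter (w = 5) and the use of coefficient magnitudes are assumptions of the example, not reported parameters.

```python
import numpy as np
from scipy.signal import fftconvolve

def morlet_power(sig, fs=500, freqs=np.arange(5, 31), w=5.0):
    """|CWT| of a 1-D signal via convolution with complex Morlet wavelets."""
    coeffs = []
    for f in freqs:
        s = w / (2 * np.pi * f)                 # Gaussian width of the wavelet
        t = np.arange(-4 * s, 4 * s, 1 / fs)    # wavelet support (+-4 widths)
        wavelet = np.exp(2j * np.pi * f * t - t**2 / (2 * s**2)) / np.sqrt(s)
        coeffs.append(np.abs(fftconvolve(sig, wavelet, mode='same')))
    return np.array(coeffs)                     # shape: (n_freqs, n_samples)

def select_by_r2(X, y, keep=0.30):
    """Indices of the `keep` fraction of features most related to labels y."""
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    r = np.nan_to_num(r)                        # guard against constant features
    k = max(1, int(keep * X.shape[1]))
    return np.argsort(r ** 2)[::-1][:k]         # highest R^2 first
```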

The selected features were processed using principal component analysis (PCA), applied separately to the amplitude features and the wavelet features. The 80 components with the highest variance for each feature type were selected and constituted the new feature sets. Before and after PCA, z-score normalization was applied either to all values of each feature (across all trials) or to all features within a single trial (separately for the amplitude and wavelet features). Normalization within a single trial was regarded as a way of adapting to the local feature level, which could vary gradually over time.
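A compact sketch of this step, using sklearn for brevity (the paper does not name a toolbox). For simplicity, the pre-PCA normalization (Z1) is implemented here as per-feature z-scoring only, and the post-PCA normalization (Z2) as either per-feature or per-trial z-scoring of the 80 principal components.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def reduce_features(X_train, X_test, n_comp=80, z2='features'):
    """Z-score, project onto 80 principal components, optionally z-score again."""
    scaler = StandardScaler().fit(X_train)          # Z1: per-feature z-score
    pca = PCA(n_components=n_comp).fit(scaler.transform(X_train))
    Z_tr = pca.transform(scaler.transform(X_train))
    Z_te = pca.transform(scaler.transform(X_test))
    if z2 == 'features':                            # Z2: per-feature z-score
        s2 = StandardScaler().fit(Z_tr)
        Z_tr, Z_te = s2.transform(Z_tr), s2.transform(Z_te)
    elif z2 == 'trials':                            # Z2: z-score within each trial
        Z_tr = (Z_tr - Z_tr.mean(1, keepdims=True)) / Z_tr.std(1, keepdims=True)
        Z_te = (Z_te - Z_te.mean(1, keepdims=True)) / Z_te.std(1, keepdims=True)
    return Z_tr, Z_te
```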

EEG-based classification of control and non-control fixations

For classification, linear discriminant analysis with shrinkage regularization was used. It ensures effective training on small training sets (such as the one available in this study) even when feature dimensionality is relatively high, and it has proved highly effective in BCIs based on event-related potentials [13, 14].

Classification quality was assessed using 5-fold cross-validation. Classifier training, feature selection, the calculation of the mean values and standard deviations used for feature normalization (in case it was applied trialwise), and dimensionality reduction were all carried out on the data designated as the training set. The derived feature selection rule, the means and standard deviations of the corresponding value sets, the weight matrix of the selected components, and the weights of the trained classifier were then applied to the remaining data, treated as the test sample. This arrangement of cross-validation reproduced a realistic situation of on-line classifier use in a BCI.

As a classification quality metric we used the AUC (Area Under Curve, where "Curve" refers to the Receiver Operating Characteristic, or ROC, curve), an integral performance index widely applied in similar studies. It shows to what extent classification results differ from chance over the various classifier threshold values that can be chosen to separate the classes with different ratios of error types, depending on the specific purpose the classifier serves. If classification results do not differ from random guessing, the AUC approaches 0.5; if the classifier makes no errors, the AUC equals 1. To compare AUC values across feature sets, multivariate analysis of variance (MANOVA) and the Bonferroni post hoc test were applied using Statistica 7.0 software (StatSoft, USA).
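To make the training and evaluation scheme concrete, here is a minimal sketch. sklearn's eigen-solver LDA with Ledoit-Wolf ("auto") shrinkage stands in for the shrinkage-regularized discriminant described above; the 5-fold split follows the text, while stratification and the fixed random seed are assumptions of the example. In the full pipeline, feature selection, normalization statistics and PCA would likewise be fitted on the training fold only.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score

def cross_validated_auc(X, y, n_folds=5, seed=0):
    """Mean AUC of shrinkage LDA over a stratified 5-fold cross-validation."""
    aucs = []
    for tr, te in StratifiedKFold(n_folds, shuffle=True,
                                  random_state=seed).split(X, y):
        # fit everything (here: just the classifier) on the training fold only
        lda = LinearDiscriminantAnalysis(solver='eigen', shrinkage='auto')
        lda.fit(X[tr], y[tr])
        aucs.append(roc_auc_score(y[te], lda.decision_function(X[te])))
    return float(np.mean(aucs))
```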

RESULTS

With all methods of feature extraction, individual values of classification accuracy (AUC) were above 0.5 and the group mean was no less than 0.66; however, mean AUC values differed considerably across methods (fig. 1).

A 3-way MANOVA (see table below; all three factors were repeated-measures factors) applied to individual AUC values showed that classification accuracy depended on the feature set factor (Λ = 0.06, F(2,6) = 49, p = 0.0002), while the effects of the other factors and of the interactions between factors in all their combinations were not statistically significant. The Bonferroni post hoc test showed that the difference between the amplitude and amplitude-wavelet feature sets was statistically significant (p = 0.006); no statistically significant differences between the amplitude and wavelet (p = 0.34) or between the wavelet and amplitude-wavelet (p = 0.16) feature sets were found. The set consisting of amplitude features only yielded the lowest classification accuracy. The best results were shown by the combined set (amplitude and wavelet features together). With the combined EEG feature set, the group mean AUC increased by 0.05-0.08 (depending on the normalization method) compared to the amplitude set. The group mean AUC was 0.75 ± 0.04 (M ± SD) with features normalized both before and after PCA, and 0.75 ± 0.06 with features normalized before PCA and within trials after PCA.

Fig. 2 shows individual results for the feature extraction method that yielded the highest group-averaged AUC. The individual curves on the graph give the rates of the various error types observed at various classification threshold values. Of particular importance is the EBCI classifier's sensitivity, i.e., the rate of correctly identified control fixations, under the condition of a low false positive rate. As shown in fig. 2, when the false positive rate was fixed at 0.1 (which can be achieved by selecting the corresponding classifier threshold using a separate data set), only one subject demonstrated sensitivity below 0.2, another subject had sensitivity above 0.5, and the scores of the rest fell between those two values.
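The sensitivity-at-fixed-specificity reading used here is easy to reproduce from classifier scores; a sketch (the 0.1 false positive rate comes from the text, while the use of decision-function scores is an assumption):

```python
import numpy as np
from sklearn.metrics import roc_curve

def sensitivity_at_fpr(y_true, scores, target_fpr=0.1):
    """Sensitivity and threshold at the largest FPR not exceeding the target."""
    fpr, tpr, thr = roc_curve(y_true, scores)
    i = np.searchsorted(fpr, target_fpr, side='right') - 1  # last fpr <= target
    return tpr[i], thr[i]
```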



Fig. 1. Dependence of the classification accuracy (AUC) for gaze fixations (control and non-control) on the method used for extracting features from the EEG recorded during gaze fixation

Legend: A, amplitude features only; B, wavelet features only; AB, combined (amplitude-wavelet) feature set; Z1, normalization type before PCA; Z2, normalization type after PCA; "features", normalization of each feature separately; "trials", normalization of features within a single trial. Vertical lines represent 95 % confidence intervals.


Table. Effect of the feature extraction method on classification accuracy (AUC)

Factors | Wilks' Λ | F | df (Effect, Error) | p
Z1 (normalization before PCA) | 0.71 | 2.85 | 1, 7 | 0.1354
Z2 (normalization after PCA) | 0.67 | 3.43 | 1, 7 | 0.1064
Feature set (A, B, AB) | 0.06 | 49.01 | 2, 6 | 0.0002
Z1 × Z2 | 0.86 | 1.18 | 1, 7 | 0.3139
Z1 × feature set | 0.48 | 3.26 | 2, 6 | 0.1101
Z2 × feature set | 0.68 | 1.41 | 2, 6 | 0.3138
Z1 × Z2 × feature set | 0.79 | 0.81 | 2, 6 | 0.4881

Note. The dependence of AUC on normalization before PCA (Z1), normalization after PCA (Z2), feature set type (amplitude, wavelet, amplitude-wavelet) and their interactions (×) was analyzed using multivariate analysis of variance (MANOVA). The only statistically significant effect (p < 0.05) was that of the feature set.



DISCUSSION


Improvement of classifier performance is the key factor in the development of an EBCI capable of detecting relatively short control gaze fixations from the EEG recorded during such fixations, since only single signal intervals lasting a few hundred milliseconds are available for analysis in this BCI paradigm.

The quality of classification at a low false alarm rate deserves separate discussion. In an EBCI, it is easy to provide a safety net for the case where a control fixation is not identified: if the interface does not respond when the 500 ms threshold is reached, the user can simply continue fixating, and the system will respond once an additional threshold (for example, 1,000 ms) is reached, even without a response from the EEG classifier. We may suppose that, with an EBCI equipped with this kind of safety net, the brain of a user interested in speeding up interface activation could learn to produce the EEG pattern that accompanies control fixations and thereby ensure considerably more frequent responses of the classifier. However, this requires a minimum entry-level of control. As fig. 2 demonstrates, the signal preprocessing and feature extraction scheme developed here would already allow some subjects to evoke a faster interface response in half of their control fixations at a relatively low false alarm rate (0.1).
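For illustration, the safety-net triggering rule described above can be expressed in a few lines of logic; the 500 and 1,000 ms thresholds come from the text, while the function and variable names are hypothetical:

```python
def should_trigger(fix_dur_ms: float, eeg_score: float, thr: float) -> bool:
    """Decide whether the interface should respond to the current fixation."""
    if fix_dur_ms >= 1000:                        # fallback: long dwell always works
        return True
    if fix_dur_ms >= 500 and eeg_score > thr:     # fast path via the EEG classifier
        return True
    return False
```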

While we can already speculate about the nature of the amplitude features used for classification in our EBCI, assuming they may be related to a negative potential associated with the expectation of feedback upon interface response [Shishkin et al., in prep.], the nature of the wavelet features still requires further elucidation. It should be noted that the patterns of EEG frequency components typical of various brain states are highly individual, and their specifics can be observed only partially at the group level. However, they can be successfully classified if the classifier is trained on individual data, in particular in the BCI paradigm [15-18]. Still, the high dimensionality of such data requires an especially elaborate approach to the different stages of analysis, with a larger number of subjects involved in such studies whenever possible. We have made only the first steps in this direction, but the similar results obtained with different methods of data normalization may indicate a relatively high robustness of the proposed scheme for data preprocessing and informative feature extraction, and its good prospects for EBCI development.

CONCLUSIONS

In this work, we made a first attempt to use the time-frequency EEG representation, i.e., the representation of EEG frequency components as a function of time from fixation onset, in the EBCI paradigm. The use of these features allowed us to achieve classification accuracy at least as good as that based on the amplitude features used in our previous work. Moreover, the combination of both feature sets improved classification accuracy. We believe that further improvement of the computational methods will bring us close to a practical application of eye-brain-computer interfaces, which combine the main advantages of standard BCIs and of control systems based on eye tracking.


Fig. 2. ROC (Receiver Operating Characteristic) curves for all subjects obtained with the amplitude-wavelet feature set, feature normalization before PCA and trial normalization after PCA (the feature extraction method that yielded the highest group-averaged AUC). The red line shows random classification; the grey vertical line gives an example of a strict requirement on classifier specificity (false positive rate = 0.1).

References

1. Wolpaw JR, Birbaumer N, McFarland DJ, Pfurtscheller G, Vaughan TM. Brain-computer interfaces for communication and control. Clin Neurophysiol. 2002; 113 (6): 767-91.

2. BNCI Horizon 2020. The Future of Brain/Neural Computer Interaction: Horizon 2020. Appendix C: End Users. 7th Framework Programme of the European Union. Available from: http://bnci-horizon-2020.eu/roadmap.

3. Kaplan AYa, Kochetova AG, Shishkin SL, Basyul IA, Ganin IP, Vasilyev AN, Liburkina SP. Experimental and theoretical foundations and practical implementation of brain-computer interface technology. Bulletin of Siberian Medicine. 2013; 12 (2): 21-9. Russian.

4. Kaplan AYa. Neurophysiological foundations and practical realizations of the brain-machine interface technology in neurological rehabilitation. Human Physiology. 2016; 42 (1): 103-10. Russian.

5. Chen X, Wang Y, Nakanishi M, Gao X, Jung TP, Gao S. Highspeed spelling with a noninvasive brain-computer interface. Proc Natl Acad Sci U S A. 2015; 112 (44): E6058-67.

6. Majaranta P. Text entry by eye gaze [dissertation]. Tampere, Finland: University of Tampere; 2009. Available from: http://tampub.uta.fi/handle/10024/66483.

7. Jacob RJK. The use of eye movements in human-computer interaction techniques: what you look at is what you get. ACM Transactions on Information Systems. 1991; 9 (2): 152-69.

8. Velichkovsky BM, Hansen JP. New technological windows into mind: there is more in eyes and brains for human-computer interaction. In: Proceedings of the SIGCHI conference on Human factors in computing systems; 1996 Apr 13-18; Vancouver, BC, Canada. New York: ACM; 1996. p. 496-503.

9. Pfurtscheller G, Allison BZ, Bauernfeind G, Brunner C, Escalante TS, Scherer R, et al. The hybrid BCI. Front Neurosci. 2010; 4: 42. Available from: http://journal.frontiersin.org/article/10.3389/fnpro.2010.00003/full.

10. Zander TO, Kothe C. Towards passive brain-computer interfaces: applying brain-computer interface technology to human-machine systems in general. J Neural Eng. 2011; 8 (2): 025005.


11. Protzak J, Ihme K, Zander TO. A passive brain-computer interface for supporting gaze-based human-machine interaction. In: Stephanidis C, Antona M, editors. Universal Access in HumanComputer Interaction. Design Methods, Tools, and Interaction Techniques for eInclusion. Springer; 2013. p. 662-71.

12. Shishkin SL, Svirin EP, Nuzhdin YO, Fedorova AA, Trofimov AG, Slobodskoy-Plusnin JY, et al. Learn waiting! Contingent negative variation may help you to control with your eye-gaze. In: Pechenkova EV, Falikman MV, editors. Cognitive Science in Moscow: New Studies. Moscow: BukiVedi; 2015. p. 486-91. Russian.

13. Blankertz B, Lemm S, Treder M, Haufe S, Muller KR. Single-trial analysis and classification of ERP components — a tutorial. NeuroImage. 2011; 56 (2): 814-25.

14. Schultze-Kraft M, Birman D, Rusconi M, Allefeld C, Gorgen K, Dahne S, et al. The point of no return in vetoing self-initiated movements. Proc Natl Acad Sci U S A. 2016; 113 (4): 1080-5.

15. Ivanitsky GA. Recognition of the task type in the process of its mental solving by a few-second EEG record using the learned classifier. Zh Vyssh Nerv Deiat I P Pavlova. 1997; 47: 743-7. Russian.

16. Dat TH, Shue L, Guan C. Electrocorticographic signal classification based on time-frequency decomposition and nonparametric statistical modeling. In: 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 2006. p. 2292-5.

17. Roik AO, Ivanitskii GA. A neurophysiological model of the cognitive space. Neuroscience and Behavioral Physiology. 2013; 43 (2): 193-9.

18. Frolov A, Husek D, Bobrov P. Comparison of four classification methods for brain-computer interface. Neural Network World. 2011; 21 (2): 101-15.

19. Frolov A, Husek D, Bobrov P, Mokienko O, Tintera J. Sources of electrical brain activity most relevant to performance of brain-computer interface based on motor imagery. In: Fazel-Rezai R, editor. Brain-Computer Interface Systems: Recent Progress and Future Prospects. InTech; 2013. p. 175-93.
