
Monitoring Methods for Biocontrol of Robotic Wheelchairs

Tatiana V. Istomina1,2*, Elmin V. Bayramov1, Elena V. Petrunina1, Denis K. Pecherskiy3, and Elena V. Kopylova2

1 Moscow Polytechnic University, 38 Bolshaya Semyonovskaya str., Moscow 107023, Russia

2 Moscow Power Engineering Institute, 14 Krasnokazarmennaya str., building 1, Moscow 111250, Russia

3 Moscow State University of Food Production, 11 Volokolamskoe shosse, Moscow 125080, Russia

*e-mail: istom@mail.ru

Abstract. The challenges that arise in the process of developing robotic means of locomotion controlled by people with disabilities are examined in this paper. In addition, the methods of managing modern wheelchairs are analyzed. In order to prevent the occurrence of critical situations in the process of persons with disabilities movement, the ways of monitoring their condition are examined in the following work. Moreover, a comprehensive approach that increases the reliability of biocontrol of robotic wheelchairs has been proposed. © 2023 Journal of Biomedical Photonics & Engineering.

Keywords: neurointerface; eye-tracking; biocontrol; robotic wheelchair; condition monitoring; disabled people.

Paper #8969 received 2 May 2023; accepted for publication 6 Sep 2023; published online 13 Nov 2023. doi: 10.18287/JBPE23.09.040305.

1 Introduction

Improving the quality of life and socialization of disabled people by enhancing the technical means that enable their mobility is an urgent task in the modern world. It is especially significant for people with disorders of the musculoskeletal system [1], with the aim of increasing the safety of their movement in robotic wheelchairs with advanced means of control.

The primary challenges posed by robotic vehicles driven by people with disabilities are related to the poor quality of the control systems, as well as to the high probability of stressful situations, which can cause critical biophysiological states that in turn reduce the effectiveness of control even further.

Modern methods of wheelchair control involve the use of a human-machine interface based on the perception of tactile, verbal, gesture, or visual commands. Recently, a new direction in this field has become widespread: the neurointerface (or brain-computer interface). Based on biocontrol methods that operate directly on signals of human brain activity, it has advantages over other types of human-machine interfaces for people with disabilities [2].

However, tactile, speech, and gesture methods of controlling a robotic wheelchair are unavailable when people lack the necessary motor functions. Therefore, the most promising approach is to employ eye-tracking and a neurointerface.

To prevent critical situations during wheelchair mobility, it is necessary to ensure high reliability of recognition of the robotic wheelchair control signals, which cannot be provided by each of the methods under consideration in isolation. In addition, regular monitoring of persons with locomotor disabilities during movement is necessary.

It is also important to use an individual approach to set up and test the control system, which can be achieved by applying operator and system training in virtual reality by simulating stressful driving situations.

The paper proposes a comprehensive approach based on a combination of virtual reality learning and control methods based on eye-tracking and neural interface. This approach will improve the efficiency and reliability of biocontrol of robotic wheelchairs.

2 Models and Methods

A number of current biocontrolled wheelchair designs are based on an eye-monitor interface; such an interface reduces the viewing angle and complicates wheelchair control. Eye gesture detection is effective in the design of a robotic wheelchair because it avoids the need for a monitor interface [3].

The generalized algorithm of the eye-tracking method includes the following steps:

- eye-tracker calibration using standard hardware and software; video-camera scanning of the near-infrared signal reflected from the subject's pupil;

- software filtering and correction of deviations to determine the current coordinates of the gaze position, which are taken into account when controlling the robotic wheelchair.

Fig. 1 The eye-tracking calibration process.

This paper was presented at the IX International Conference on Information Technology and Nanotechnology (ITNT-2023), Samara, Russia, April 17-21, 2023.

The disadvantage of the eye-tracking method is that the parameters transmitted to the control mechanisms must be constantly updated, with the accuracy of the eye-movement data assessed for each particular subject. At the same time, the accuracy of the eye tracker's real-time measurements of gaze coordinates is significantly affected by various types of interference and stochastic conditions.

Let us consider a generalized eye-tracking algorithm.

1. The eye-tracker calibration process is performed using standard hardware and software.

2. The eye-tracker emits light in the near-infrared range.

3. The light is reflected from the subject's eyes.

4. The reflections are recorded by the eye-tracker reading camera (Fig. 1).

5. After filtering, the coordinates of the current gaze position are determined.
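The filtering in step 5 can be sketched as a moving-average smoother with rejection of implausible jumps; the window size and jump threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def smooth_gaze(xy, window=5, max_jump=80.0):
    """Filter raw gaze samples (N x 2 array, pixels): reject implausible
    jumps, then apply a centered moving average to suppress noise."""
    xy = np.asarray(xy, dtype=float)
    clean = xy.copy()
    for i in range(1, len(clean)):
        # replace samples that jump farther than max_jump pixels
        # with the previous accepted sample
        if np.linalg.norm(clean[i] - clean[i - 1]) > max_jump:
            clean[i] = clean[i - 1]
    kernel = np.ones(window) / window
    return np.column_stack([
        np.convolve(clean[:, 0], kernel, mode="same"),
        np.convolve(clean[:, 1], kernel, mode="same"),
    ])
```

Note that `mode="same"` zero-pads at the edges, so the first and last few samples are attenuated; a real pipeline would handle the boundaries explicitly.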

The gaze position coordinates (X and Y values in pixels) must be converted to frontal gaze angles and gaze offset and tilt angles for further processing. The unprocessed left- and right-eye gaze coordinates (X_left, Y_left) and (X_right, Y_right) obtained from the eye tracker are used to estimate the gaze angles and gaze offset [4]:

Gaze_X = mean(X_left + X_right),

Gaze_Y = mean(Y_left + Y_right).

The display distance (RD) is the distance on the screen between a coordinate point and a specific viewpoint with coordinates (Gx, Gy). For example, if the tracker is fixed directly under the screen and the source of the coordinates is the center of the screen, we get the following expression:

RD (mm) = p · √(Gx² + Gy²),

where p is the pixel pitch of the display in units of mm/pixel.
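The two-eye averaging and the display-distance expression above can be written directly; the pixel pitch default below is an illustrative assumption.

```python
import math

def gaze_center(x_left, y_left, x_right, y_right):
    """Average the left- and right-eye raw coordinates (pixels)
    into a single gaze point."""
    return ((x_left + x_right) / 2.0, (y_left + y_right) / 2.0)

def display_distance_mm(gx, gy, pixel_pitch_mm=0.25):
    """RD = p * sqrt(Gx^2 + Gy^2): on-screen distance in mm from the
    coordinate origin to the viewpoint (Gx, Gy), given pixel pitch p."""
    return pixel_pitch_mm * math.hypot(gx, gy)
```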

After obtaining the coordinates, further data processing in the eye-tracker channel consists of the detection of fixations and saccades. It should be noted that the "fixations" and "saccades" detected by the software are the result of a special neural-network data-processing algorithm, with certain training settings chosen to increase detection efficiency.
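While the paper relies on a trained neural-network detector, the classical baseline for this step is the velocity-threshold (I-VT) classifier; the sampling rate and threshold below are illustrative assumptions.

```python
import numpy as np

def ivt_labels(xy, fs=60.0, velocity_threshold=1000.0):
    """Label each gaze sample as 'fixation' or 'saccade' by
    point-to-point velocity (pixels/s).
    xy: N x 2 array of gaze coordinates, fs: sampling rate in Hz."""
    xy = np.asarray(xy, dtype=float)
    # velocity between consecutive samples, in pixels per second
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) * fs
    labels = np.where(speed > velocity_threshold, "saccade", "fixation")
    # the first sample has no predecessor; copy the first label
    return np.concatenate([labels[:1], labels])
```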

One biofeedback channel is not enough to provide the necessary reliability of recognition of the subject's commands; therefore, to increase the reliability and safety of the biofeedback interface, an additional parallel channel should be chosen.

Currently, the brain-computer interface is being actively developed all over the world; its implementation is technically quite complicated and requires lengthy individual setup.

Let us consider using brain-computer interface as an additional biofeedback channel.

Brain-computer interfaces (BCIs) provide a direct pathway from the human brain to the computer using techniques for recording and decoding brain output. The first BCI systems were used primarily for stroke rehabilitation. More recently, BCIs have been widely used to improve the quality of life of patients with disabilities operating assistive devices such as electric wheelchairs and prosthetics, and they are now applied to people without disabilities as well. These BCIs are mostly noninvasive systems with an electroencephalogram (EEG) monitoring function that can be integrated into portable devices.

In general, BCIs contain five main processing steps: data acquisition, preprocessing, feature extraction, classification, and feedback. EEG signals have a low signal-to-noise ratio and tend to be nonstationary over time. Methodologies and classifiers have traditionally been built from expert knowledge, prior knowledge, and judgment about the subject domain, and they apply only to specific datasets; such a strategy does not scale easily to other experiments or datasets.
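The five steps can be organized as a minimal pipeline. The stub functions below are placeholders illustrating only the data flow and are not the paper's implementation; the simulated data, the variance features, and the threshold classifier are all assumptions.

```python
import numpy as np

def acquire(n_channels=4, n_samples=256, seed=0):
    """Step 1: data acquisition (simulated EEG, channels x samples)."""
    return np.random.default_rng(seed).standard_normal((n_channels, n_samples))

def preprocess(eeg):
    """Step 2: preprocessing -- here, per-channel mean removal."""
    return eeg - eeg.mean(axis=1, keepdims=True)

def extract_features(eeg):
    """Step 3: feature extraction -- per-channel signal variance."""
    return eeg.var(axis=1)

def classify(features, threshold=0.5):
    """Step 4: classification -- a trivial threshold stand-in."""
    return int(features.mean() > threshold)

def feedback(label):
    """Step 5: feedback -- map the class to a wheelchair command."""
    return {0: "stop", 1: "go"}[label]

command = feedback(classify(extract_features(preprocess(acquire()))))
```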

The physiological basis of the correlation between brain activity and motor activity is that body movement can generate mu (8-12 Hz) and beta (16-26 Hz) rhythms with event-related (de)synchronization (ERD/ERS) in sensorimotor brain regions [4]. Some studies of motor-activity-based devices (wheelchairs, prostheses, robots) have medical applications and support the further development of complementary technologies for humans. Common feature extraction algorithms for classifying motor activity from EEG are based on the detection of spatial patterns (SPs).
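ERD/ERS is typically quantified via band power in the mu and beta ranges. A minimal FFT-based estimate is sketched below; it is an illustration of the measure, not the paper's processing chain.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Average power of `signal` within [lo, hi] Hz from the
    FFT periodogram. fs: sampling rate in Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

fs = 256
t = np.arange(fs * 2) / fs            # 2 s of signal
eeg = np.sin(2 * np.pi * 10 * t)      # dominant 10 Hz (mu-band) component
mu = band_power(eeg, fs, 8, 12)       # mu rhythm, 8-12 Hz
beta = band_power(eeg, fs, 16, 26)    # beta rhythm, 16-26 Hz
```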

The idea of the Common Spatial Pattern (CSP) method is to find a set of spatial filters that optimally distinguish several classes of EEG recordings. The filter-bank CSP (FBCSP) algorithm, which uses a feature selection operator, allows selecting the most suitable spatial filter for feature extraction. The advantages of this method are its simplicity and accuracy. Other CSP-based methods similarly extract valuable components of EEG signals after special software analysis.
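For two classes, the CSP criterion reduces to an eigendecomposition of the class covariance matrices. The sketch below uses the classical whitening formulation and assumes trials are given as channels x samples arrays; it is a didactic baseline, not the FBCSP variant discussed above.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_filters=1):
    """Compute CSP spatial filters from two lists of EEG trials
    (each trial: channels x samples). Returns 2*n_filters filters:
    those maximizing class-A variance relative to class B, and vice versa."""
    def mean_cov(trials):
        covs = []
        for x in trials:
            c = x @ x.T
            covs.append(c / np.trace(c))   # normalize out per-trial power
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # whiten the composite covariance ca + cb
    w, v = np.linalg.eigh(ca + cb)
    whiten = v @ np.diag(w ** -0.5) @ v.T
    # eigenvectors of the whitened class-A covariance yield the filters;
    # eigh returns eigenvalues in ascending order
    vals, vecs = np.linalg.eigh(whiten @ ca @ whiten)
    filters = vecs.T @ whiten
    # extreme eigenvalues give the most discriminative filters
    pick = np.r_[np.arange(n_filters),
                 np.arange(len(vals) - n_filters, len(vals))]
    return filters[pick]
```

Trial variances after projection through these filters are the usual CSP features fed to a classifier.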

Unfortunately, EEG characteristics change significantly over time and vary from person to person. Therefore, it is necessary to create more reliable and more generalized algorithms for feature extraction, especially for biocontrol applications.

Deep learning has made great strides in computer vision, natural language processing, and speech recognition. Current trends in these advances seem to suggest that neural computing units, such as convolutional layers in convolutional neural networks (CNNs), can extract implicit features from signals to improve system performance.

Deep learning is also promising for EEG feature extraction. However, applying deep learning in an EEG-based interface control system (ICS) faces two problems: first, the low signal-to-noise ratio (SNR) and time-dependent covariates of the EEG signal make feature extraction difficult; second, insufficient datasets and individually different EEG signals among subjects result in low learning efficiency for artificial neural networks.

A multi-channel method of controlling a mobile robotic wheelchair is known that implements an extended brain-computer interface, with the control system built on the parallel operation of independent channels (neurointerface, speech, and gesture control). However, in complex situations this approach leads to a "conflict of interest" due to the inability to simultaneously execute opposite commands from multiple channels. To resolve such conflicts, methods of coordinated control and decomposition based on quality assessment of the control channels are employed [5].

3 Results

This paper proposes an integrated approach based on the coordinated application of the two most promising methods of wheelchair biocontrol, namely eye-tracking and the neurointerface, with the commands either confirming each other or taken only from the channel to which the control data processing system gives priority at a particular moment. Thus, even in the case of conflicting user commands, the proposed approach ensures high reliability of wheelchair control.
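The coordination logic just described — execute when the channels confirm each other, otherwise defer to the prioritized channel — can be sketched as follows. The command names, confidence values, and the fallback to "stop" are illustrative assumptions, not the paper's specification.

```python
def fuse_commands(eye_cmd, bci_cmd, eye_conf, bci_conf, min_conf=0.6):
    """Arbitrate between the eye-tracking and neurointerface channels.
    If both channels agree, execute the command; otherwise take the more
    confident channel, but only if its confidence clears min_conf --
    else fail safe with 'stop'."""
    if eye_cmd == bci_cmd:
        return eye_cmd                 # channels confirm each other
    best_cmd, best_conf = max(
        [(eye_cmd, eye_conf), (bci_cmd, bci_conf)], key=lambda p: p[1]
    )
    return best_cmd if best_conf >= min_conf else "stop"
```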

Experimental studies of EEG signals under normal and stress conditions using NVX24 equipment from MKS LLC (Moscow) are being actively carried out; an example is shown in Fig. 2.

Thirty people (including people with disabilities) aged 21 ± 3 years participated in this study over 3 months. Participants performed a series of special tasks over several sessions, yielding a total of 760 trials. All recording sessions were performed in our laboratory on MCS NVX24 digital DC EEG 2.02 equipment.

Fig. 2 Example of experimental EEG signal extraction using dry gel from MKS.

The EEG signal dataset contains a total of 2870 trials; augmentation techniques were used to expand the collected data. CNNs can successfully classify spectrograms. In this study, it is also appropriate to analyze the data using the classical medical features that are standard in EEG spectral analysis. Thus, we introduce a parallel CNN to classify the integrated eye-tracking and EEG results.

The architecture of the CNN is presented in Fig. 3, and the detailed architecture of the proposed network is presented in Table 1. In Table 1, T is the number of time points, C is the number of channels, Fs is the spatial filter, Ft is the temporal filter, D is the ratio of Ft to Fs, and Nc is the number of classes. In the proposed method, we introduce a parallel CNN with two inputs for data from the eye-tracker channels and four inputs for the signal amplitude of certain EEG frequencies.

In the proposed architecture, the output feature maps are fed into a spatial convolutional module, with the kernel length of the temporal filter Ft set to 32 and 16. A dropout layer follows the pooling layer to avoid overfitting, with dropout rate p = 0.5.

In the proposed study, we use a network training algorithm similar to the standard CNN training algorithm. To test the feasibility and effectiveness of the proposed method, we conducted a series of motor imagery (MI) classification experiments. The results on the datasets show that the CNN achieves a predictive value of 76.32%.

As part of this integrated approach, an original algorithm and methodology for setting parameters of the neurointerface and testing the wheelchair control system based on the virtual reality glasses and simulation of various traffic situations are being developed (Fig. 4).

The results of the experimental evaluation of EEG parameters in different stress situations in the studied group of persons with disabilities show that the highest values of the average signal-power deviation ratio in the "stress/peace" categories are found in the delta and beta rhythms of the EEG.

Fig. 3 The architecture of CNN.

Table 1 Detailed architecture of the proposed network.

Layer | Filters | Kernel | Stride | Parameters | Output | Activation | Padding
Input | | | | | (C, T) | |
Reshape | | | | | (1, T, C) | |
TimeConv1 | Ft | (16, 1) | (1, 1) | 16 * Ft | (Ft, T, C) | ReLU | Same
TimeConv2 | Ft | (16, 1) | (1, 1) | 16 * Ft | (Ft, T, C) | ReLU | Same
TimeConv3 | Ft | (32, 1) | (1, 1) | 32 * Ft | (Ft, T, C) | ReLU | Same
TimeConv4 | Ft | (32, 1) | (1, 1) | 32 * Ft | (Ft, T, C) | ReLU | Same
TimeConv5 | Ft | (32, 1) | (1, 1) | 32 * Ft | (Ft, T, C) | ReLU | Same
TimeConv6 | Ft | (32, 1) | (1, 1) | 32 * Ft | (Ft, T, C) | ReLU | Same
BatchNorm | | | | 2 * Ft | (4 * Ft, T, C) | |
Concatenate | | | | | (4 * Ft, T, C) | |
SpatialConv | Fs | (1, C) | (1, 1) | C * 4 * Ft * Fs | (Fs, T, 1) | ReLU | Valid
BatchNorm | | | | 2 * Fs | (Fs, T, 1) | |
MaximumPool | | (75, 1) | (15, 1) | | (Fs, T//15, 1) | | Valid
Dropout | | | | | (Fs, T//15, 1) | |
Linear | | | | | (Fs, T//15, 1) | Square |
Dropout | | | | | (Fs, T//15, 1) | |
Classifier | Nc | (T//15, 1) | (1, 1) | Fs * (T//15) * Nc | Nc | Linear | Valid
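The output shapes in Table 1 can be cross-checked with a small calculator that propagates a (C, T) input through the layer sequence; this is a sketch of the shape bookkeeping only, not of the network itself.

```python
def table1_shapes(C, T, Ft, Fs, Nc):
    """Propagate tensor shapes through the layers of Table 1.
    Temporal convolutions use 'same' padding (T and C preserved),
    SpatialConv collapses the channel axis with a (1, C) 'valid'
    kernel, and pooling with stride (15, 1) reduces time to T//15."""
    shapes = {"Input": (C, T), "Reshape": (1, T, C)}
    shapes["TimeConv (each branch)"] = (Ft, T, C)   # 'same' padding
    shapes["Concatenate"] = (4 * Ft, T, C)          # four branch groups stacked
    shapes["SpatialConv"] = (Fs, T, 1)              # channel axis collapsed
    shapes["MaximumPool"] = (Fs, T // 15, 1)        # stride (15, 1)
    shapes["Classifier"] = (Nc,)
    return shapes
```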

Fig. 4 An example of a virtual reality scene for teaching people with locomotor disabilities.

Fig. 5 Mason A4100 smartwatch for patient monitoring.

Thus, for biocontrol in general, and for individual adjustment and testing of neurointerface operation, it is reasonable to choose these very EEG channels [6]. In order to prevent critical situations during disabled people's movement, it is essential to ensure high reliability of recognition of robotic wheelchair control signals, which cannot be provided by any of the considered methods in isolation. In addition, permanent monitoring of persons with locomotor disabilities during their mobility is required.

It is also of great significance to employ an individualized approach to setting up and testing the control system, which can be ensured by applying operator and system training in virtual reality through simulation of stressful driving situations.

The paper proposes an integrated approach based on the combination of virtual reality training and control methods powered by eye-tracking and a neurointerface, improving both the efficiency and reliability of biocontrol of robotic wheelchairs. The occurrence of critical health impairments among disabled people moving in mobile wheelchairs can lead to emergency situations; to prevent them, ways of monitoring the users' condition have been analyzed and the following equipment has been selected.

The Mason A4100 smartwatch [7, 8] provides blood pressure monitoring, pulse measurement and control, user identification by ECG, breath and stress monitoring, location tracking, and fall detection (Fig. 5).
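Condition monitoring from such a wearable reduces, at its simplest, to threshold checks on the streamed vitals that trigger the wheelchair's inhibition mechanism. The thresholds below are illustrative assumptions only, not clinical values from the paper or from the device documentation.

```python
def check_vitals(pulse_bpm, systolic_mmhg, fall_detected):
    """Return a list of alerts that should trigger the wheelchair's
    safety (inhibition) mechanism. All thresholds are illustrative."""
    alerts = []
    if not 50 <= pulse_bpm <= 120:
        alerts.append("pulse out of range")
    if not 90 <= systolic_mmhg <= 160:
        alerts.append("blood pressure out of range")
    if fall_detected:
        alerts.append("fall detected")
    return alerts
```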

The H8S-PRO complex (Fig. 6) is designed for a wide range of mobile studies with tracking of neurodynamics and visual behavior and is produced by Neurobotics LLC (Moscow) [9].

In the future, to monitor critical states associated with cramps and muscle tremors, it is planned to use Neurosens inertial sensors produced by Neurosoft, which transmit the three-dimensional coordinates of the sensor's position on the body and the movement process to a notebook computer (Fig. 7). In addition, this sensor has a modification that also records the electromyographic signal [10].
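Tremor episodes in such inertial data typically show up as a dominant oscillation of a few hertz. A minimal spectral check is sketched below; the frequency band and power-ratio threshold are illustrative assumptions, not parameters of the Neurosens sensor.

```python
import numpy as np

def tremor_detected(accel, fs, band=(4.0, 12.0), ratio_threshold=0.5):
    """Flag tremor when the fraction of (de-meaned) signal power inside
    `band` Hz exceeds ratio_threshold.
    accel: 1-D acceleration trace, fs: sampling rate in Hz."""
    x = np.asarray(accel, dtype=float)
    x = x - x.mean()                               # remove gravity/offset
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    total = psd[1:].sum()                          # skip the DC bin
    in_band = psd[(freqs >= band[0]) & (freqs <= band[1])].sum()
    return bool(total > 0 and in_band / total > ratio_threshold)
```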

The practical implementation of such systems is widely discussed around the world. The analysis of achievements in this area has shown the promise of developing an individual user interface, which can be used both for private use and to match the parameters of a particular person with serial devices of universal type, with multilevel customization based on additive manufacturing technologies [8].

Fig. 6 Complex for mobile tracking H8S-PRO.

Fig. 7 3D motion sensor.

4 Conclusion

In this paper, the most applicable ways of monitoring health conditions during the movement of disabled people are selected to prevent the occurrence of critical situations. In addition, it is proposed to employ an integrated approach based on a combination of eye-tracking and neurointerface methods. This increases the reliability of mobile wheelchair biocontrol, during which biophysical parameters are constantly monitored and analyzed, and an inhibition mechanism is activated when critical states are detected.


Disclosures

The authors declare no conflict of interest.

References

1. E. V. Petrunina, T. V. Istomina, V. V. Istomin, N. V. Trub, and E. V. Kopylova, "Research cyber-biophysical system for cognitive adaptation of people with disabilities," Mathematical Methods in Technologies and Technics 7, 68 (2021).

2. A. Bashashati, M. Fatourechi, R. K. Ward, and G. E. Birch, "A survey of signal processing algorithms in brain-computer interfaces based on electrical brain signals," Journal of Neural Engineering 4(2), R32-R57 (2007).

3. A. Kar, "MLGaze: Machine learning-based analysis of gaze error patterns in consumer eye tracking systems," Vision 4(2), 25 (2020).

4. T. I. Voznenko, E. V. Chepin, and G. A. Urvanov, "The Control System Based on Extended BCI for a Robotic Wheelchair," Procedia Computer Science 123, 522-527 (2018).

5. C. Liu, J. Jin, R. Xu, S. Li, C. Zuo, H. Sun, X. Wang, and A. Cichocki, "Distinguishable spatial-spectral feature learning neural network framework for motor imagery-based brain-computer interface," Journal of Neural Engineering 18(4), 0460e4 (2021).

6. Z. Tayeb, J. Fedjaev, N. Ghaboosi, C. Richter, L. Everding, X. Qu, Y. Wu, G. Cheng, and J. Conradt, "Validating Deep Neural Networks for Online Decoding of Motor Imagery Movements from EEG Signals," Sensors 19(1), 210 (2019).

7. Wearable Technologies, (accessed 25 September 2022). [https://www.wearable-technologies.com/2021/10/mason-unveils-the-first-ever-customizable-smartwatch-for-patient-monitoring-hospitality-or-safety].

8. "The Mason A4100 Smartwatch," Mason, (accessed 12 September 2022). [https://bymason.com/mason-smartwatch-a4100/].

9. "NeuroPlay-8C-PRO," Neuroassistive Technologies (accessed 10 October 2022). [https://neuroassist.tech/neuroplay-8c-pro/].

10. Neurosoft, (accessed 21 September 2022). [https://neurosoft.com/].
