
SHORT COMMUNICATIONS

An algorithm for detecting events in video EEG monitoring data of patients with craniocerebral injuries

D.M. Murashov1, Y.V. Obukhov2, I.A. Kershner2, M.V. Sinkin3

1 Federal Research Center "Computer Science and Control" of Russian Academy of Sciences,

119333, Russia, Moscow, Vavilov st., 40,

2 Kotel'nikov Institute of Radio Engineering and Electronics of Russian Academy of Sciences,

125009, Russia, Moscow, Mokhovaya str., 11-7,

3 Sklifosovsky Research Institute for Emergency Medicine of Moscow Healthcare Department,

129090, Russia, Moscow, Bolshaya Sukharevskaya Square, 3

Abstract

One of the problems solved by analyzing long-term video EEG monitoring data is the differentiation of epileptic and artifact events. For this purpose, not only multichannel EEG signals but also the video data are analyzed, since traditional methods based on the analysis of EEG wavelet spectrograms cannot reliably distinguish an epileptic seizure from a chewing artifact. In this paper, we propose an algorithm for detecting artifact events based on a joint analysis of the optical flow level and the ridges of wavelet spectrograms. Preliminary results of the analysis of real clinical data are given. The results show that non-epileptic events can, in principle, be reliably distinguished from epileptic seizures.

Keywords: video EEG monitoring data, epileptic seizure, optical flow, wavelets, ridges of wavelet spectrograms, clinical applications.

Citation: Murashov DM, Obukhov YV, Kershner IA, Sinkin MV. An algorithm for detecting events in video EEG monitoring data of patients with craniocerebral injuries. Computer Optics 2021; 45(2): 301-305. DOI: 10.18287/2412-6179-CO-798.

Acknowledgements: The work was carried out within the framework of the state task and was partially supported by the Russian Foundation for Basic Research, project No. 18-29-02035.

Introduction

The development of post-traumatic epilepsy is one of the most common consequences of traumatic brain injury. Video-electroencephalographic (video EEG) monitoring is used to confirm epilepsy, to control the course of the disease and the effectiveness of therapy, and to diagnose convulsive and non-convulsive seizures. Synchronized recording of the patient's clinical condition on video and of the bioelectric activity of the brain (i.e., EEG) makes it possible to reliably diagnose epileptic seizures and differentiate them from non-epileptic events. The authors' analysis of periodicals and monographs in the studied subject area showed that there are very few publications on methods for automatically detecting epileptic seizures in video sequences obtained during video EEG monitoring. Currently, several methods have been proposed for the automatic detection of seizures from EEG data [1 - 5]. In [6, 7], the authors proposed an algorithm for automatic detection of seizures based on the analysis of quantitative characteristics of facial expressions in video sequences. Using the magnitude of the optical flow, a group of frames with high scene dynamics is detected in the video sequence. The algorithm is designed to detect two types of diagnostic events. The first type is observed when patients are in a coma. The second type manifests itself as fading for several seconds in active patients. The proposed algorithm showed that the detected events coincided quite accurately with the events found by the analysis of wavelet spectrograms of the EEG channel proposed in [4] when analyzing video EEG monitoring data. However, studying only the data from the video channel does not allow one to distinguish between activity due to the movement of the patient and activity generated by a seizure.

An important task of analyzing Video EEG data is to differentiate epileptiform activity from chewing artifacts. The method presented in [4] does not allow this.

In [5], a method for finding epileptic seizures and chewing artifacts in electroencephalographic signals, based on the analysis of their wavelet spectrograms and of the parameters of the ridges of wavelet spectrograms, was proposed. It was found that, using the maximum frequency value and the arithmetic mean deviation of the frequency of the ridge fragments of the wavelet spectrogram, an event can be attributed either to an epileptic seizure or to a chewing artifact. It was shown that, in the 3.5 to 6 Hz band of the Fourier spectra of sections of wavelet spectrograms, the spectrum peak frequency for an epileptic seizure is almost three times higher than for chewing. The half-width of the Fourier spectra of sections of EEG wavelet spectrograms at a cutoff frequency above 3.5 Hz is 1.5 - 3 times greater for chewing artifacts than for an epileptic seizure. These values are used as features by which one can differentiate an epileptic seizure from a chewing artifact. However, this method cannot distinguish seizures from artifacts associated with patient movement. To increase the reliability of differentiation, it is necessary to conduct a synchronous analysis of video sequences and wavelet spectrograms of the EEG.

In this paper, we propose an algorithm for the synchronous analysis of video sequences and EEG signals, based on a combination of the previously developed methods described in [4, 6, 7], which allows differentiating an epileptic seizure from artifacts caused by chewing and movement. The proposed algorithm is capable of detecting two types of diagnostic events in video EEG data taken from patients with brain injury.

1. Event detection in the video channel of video EEG monitoring data

The algorithm proposed in [6, 7] analyzes the dynamics of informative regions of interest covering the patient's face, head, and neck. This study addresses a more general case in which the informative region contains the whole image of the patient. It should be noted that the frames of video sequences taken from video EEG monitoring data have the following features: first, the patient is recorded from an arbitrary viewing angle; second, medical equipment may partially occlude the patient; third, medical personnel or other patients may appear in the frame. When analyzing video sequences, it is necessary to detect the following events: (a) an epileptic seizure; (b) patient movement (e.g., changing posture, moving around the room); (c) chewing (facial movement typical, for example, of eating).

As a measure of activity J(i) in the region of interest, we use the total value of the optical flow calculated for each frame of the video sequence [8], where i is the frame number. Since a noise component is present in the function J(i) (see Fig. 1), event detection must use a smoothed value of the activity measure, denoted J̄(i). For smoothing, a discrete version of the Kalman-Bucy filtering algorithm is used [9], since it provides an estimate that is optimal in the sense of minimum error variance. Each of the diagnostic and artifact events is characterized by a certain range of levels of the smoothed activity measure J̄(i). The decision on the result of event recognition is made according to a threshold rule. To exclude false positives of the detector due to short-term jumps of the optical flow, a decision on the occurrence of an event is made only if J̄(i) exceeds a predetermined threshold over a sequence of at least M frames. Thus, the decision rule is formulated in the following form:

$$\mathrm{Event}_1 = \begin{cases} 1, & \text{if } \bar{J}(i) > T_1 \text{ and } i - i_0 > M; \\ 0, & \text{if } \bar{J}(i) \le T_1 \text{ or } i - i_0 < M, \end{cases} \qquad (1)$$

where Event1 is the indicator of the event; T1 is the threshold; i0 is the number of the frame starting from which the inequality holds; M is the length of the sequence of frames required for making a decision about the presence of a diagnostic event. The threshold value is defined as

$$T_1 = J_0 + k_1 \sigma_1, \qquad (2)$$

where J0 is calculated as the mean value of J(i) in a fragment of a video sequence with low scene dynamics, σ1 is the standard deviation of J(i), and k1 is a coefficient.

Events of another type are manifested in the behavior of active patients in the form of fading for several seconds. In this case, it is proposed to detect events also using the value of the activity measure. In contrast to the case considered above, the appearance of an event corresponds to a minimum of the activity measure. The decision rule takes the following form:

$$\mathrm{Event}_2 = \begin{cases} 1, & \text{if } \bar{J}(i) < T_2 \text{ and } i - i_0 > M; \\ 0, & \text{if } \bar{J}(i) \ge T_2 \text{ or } i - i_0 < M, \end{cases} \qquad (3)$$

where Event2 is an indicator of the event, and the threshold value is calculated as follows:

$$T_2 = J_0 - k_2 \sigma_2, \qquad (4)$$

where σ2 is the standard deviation of J(i) and k2 is a coefficient.

Thus, the algorithm for recording events in the video channel of video EEG monitoring data includes the following operations.

1. Read frame number i from the video sequence data.

2. Calculate the total optical flow from adjacent frames of the video sequence and normalize it by the frame area.

3. Calculate the value of the smoothed activity measure J̄(i).

4. Check conditions (1) - (4). If the condition J̄(i) > T1 or J̄(i) < T2 is satisfied, then save in memory the number of the current frame as i0 = i. If neither condition is satisfied, then go to step 1.

5. Repeat steps 1 to 4. If the condition J̄(i) > T1 or J̄(i) < T2 holds and i - i0 > M, then a decision about the detection of an event is made. Otherwise, go to step 1.

It should be noted that a movement artifact with a sufficiently high level of J̄(i) will be detected as a seizure. Therefore, to differentiate diagnostic and artifact events, a synchronous analysis of the video record and EEG signals is necessary.
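For illustration, the sketch below shows one possible implementation of decision rules (1) - (4) and steps 1 - 5 in Python (the authors' implementation is in MATLAB). The scalar constant-level Kalman filter, the default values of k1, k2, and M, and the names kalman_smooth and detect_video_events are assumptions of this sketch rather than details taken from the paper.

```python
import numpy as np

def kalman_smooth(j, q=1e-4, r=1e-2):
    """Scalar Kalman filter with a constant-level model, standing in for the
    discrete Kalman-Bucy smoother of [9]; q and r are assumed noise variances."""
    j = np.asarray(j, dtype=float)
    out = np.empty_like(j)
    x, p = j[0], 1.0
    for i, z in enumerate(j):
        p += q                    # predict step
        k = p / (p + r)           # Kalman gain
        x += k * (z - x)          # update with the current activity sample
        p *= 1.0 - k
        out[i] = x
    return out

def detect_video_events(j, j0, sigma1, sigma2, k1=3.0, k2=2.0, m=25):
    """Apply rules (1)-(4) to the smoothed activity measure of one video."""
    jf = kalman_smooth(j)
    t1, t2 = j0 + k1 * sigma1, j0 - k2 * sigma2      # thresholds (2) and (4)
    event1 = np.zeros(len(jf), dtype=int)
    event2 = np.zeros(len(jf), dtype=int)
    run_hi = run_lo = 0
    for i, v in enumerate(jf):
        run_hi = run_hi + 1 if v > t1 else 0         # frames above T1 so far
        run_lo = run_lo + 1 if v < t2 else 0         # frames below T2 so far
        if run_hi >= m:
            event1[i] = 1                            # rule (1): sustained activity
        if run_lo >= m:
            event2[i] = 1                            # rule (3): sustained fading
    return jf, event1, event2
```

Here j is the sequence of per-frame activity values J(i), while j0, sigma1, and sigma2 are estimated beforehand from a fragment with low scene dynamics, as described above.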

2. Event detection in EEG signals

In the appendix to [10], it was shown that for a signal S(t) = A(t) exp(iφ(t)), when the amplitude A(t) > 0 exhibits relatively slow variations compared to the fast variations of the phase φ(t) and the signal complies with the asymptotic properties [11], the following expressions are valid:

$$A(t) \approx |W(t, f_r(t))|, \quad \varphi(t) \approx \arctan\frac{\operatorname{Im} W(t, f_r(t))}{\operatorname{Re} W(t, f_r(t))}, \quad \text{if } \ddot{\varphi}_S(t) \ll 2\pi f_r^2(t), \qquad (5)$$

where W(t, f_r(t)) = max_f |W(t, f)| is the ridge of the Morlet wavelet transform W, f_r is the ridge frequency, and t is time.

EEG signals are pre-filtered by a 25 Hz notch filter and a second-order Butterworth filter with a passband from 0.5 to 22 Hz. Detection of specific events in EEG signals is carried out by analyzing the power spectral density along the ridge of the wavelet spectrogram, PSD = |W(t, f_r)|² [4].
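A minimal pre-filtering sketch with SciPy is given below, assuming a one-dimensional EEG channel x and a sampling rate fs. The notch frequency and the 0.5 - 22 Hz passband are taken from the text above, while fs, the notch quality factor q, and the use of zero-phase filtfilt are assumptions of the sketch.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

def prefilter_eeg(x, fs=250.0, notch_hz=25.0, band=(0.5, 22.0), q=30.0):
    """Notch filter followed by a second-order Butterworth band-pass filter."""
    b_n, a_n = iirnotch(notch_hz, q, fs)                  # notch filter
    b_b, a_b = butter(2, band, btype="bandpass", fs=fs)   # 2nd-order Butterworth
    x = filtfilt(b_n, a_n, np.asarray(x, dtype=float))    # zero-phase notch
    return filtfilt(b_b, a_b, x)                          # zero-phase band-pass
```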

The decision rule for registering the event is as follows:

$$\mathrm{Event}_3 = \begin{cases} 1, & \text{if } \mathrm{PSD} > T_3 \text{ and } \ddot{\varphi}_S(t) \ll 2\pi f_r^2(t); \\ 0, & \text{if } \mathrm{PSD} \le T_3 \text{ and } \ddot{\varphi}_S(t) \ll 2\pi f_r^2(t), \end{cases} \qquad (6)$$

where Event3 is the indicator of the event and T3 is the threshold value of the wavelet ridge PSD, which can be found from the PSD in time intervals without events. Epileptic seizures and the myographic artifact of chewing are both characterized by a comparatively high level of PSD. Therefore, to increase the accuracy of detection of diagnostic events, a synchronous analysis of the video channel is required.
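The sketch below illustrates extracting the ridge of a complex Morlet wavelet spectrogram and thresholding its power spectral density, roughly in the spirit of rule (6). It uses PyWavelets; the frequency grid, the 'cmor1.5-1.0' wavelet parameters, and the omission of the phase condition from (6) are simplifications made here, not choices reported in the paper.

```python
import numpy as np
import pywt

def ridge_psd(x, fs, fmin=0.5, fmax=22.0, n_freq=80):
    """Return the ridge frequency f_r(t) and |W(t, f_r(t))|^2 of a Morlet CWT."""
    freqs = np.linspace(fmin, fmax, n_freq)
    # In PyWavelets, frequency = central_frequency / (scale * sampling_period),
    # so the scales corresponding to a given frequency grid are:
    scales = pywt.central_frequency("cmor1.5-1.0") * fs / freqs
    coefs, f = pywt.cwt(np.asarray(x, dtype=float), scales, "cmor1.5-1.0",
                        sampling_period=1.0 / fs)
    power = np.abs(coefs) ** 2                      # wavelet power spectrogram
    idx = power.argmax(axis=0)                      # ridge: max power per time point
    return f[idx], power[idx, np.arange(power.shape[1])]

def event3(psd_on_ridge, t3):
    """Simplified rule (6): indicator is 1 where the ridge PSD exceeds T3."""
    return (np.asarray(psd_on_ridge) > t3).astype(int)
```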

3. Event detection using the synchronous analysis of video-EEG monitoring data

Each of the diagnostic and artifact events is characterized by a certain range of levels of the smoothed activity measure J̄(i) obtained from the video channel and of the power spectral density PSD of the ridge points. The decision rules can then be arranged in Table 1 according to the values of the indicators Eventj, j = 1, 2, 3, obtained from (1) - (6) during the synchronous analysis of video EEG monitoring data.

Table 1. Decision rules for detecting diagnostic and artifact events in the analysis of video EEG monitoring data

Event      Event1   Event2   Event3
Seizure       1        0        1
Seizure       0        1        1
Chewing       0        0        1
Moving        1        0        0
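As an illustration, the combined rule of Table 1 can be written as a simple lookup over indicator triples. The dictionary below merely restates the table; the alignment of the video-frame and EEG indicators on a common time grid, which such a lookup presupposes, is left outside this sketch.

```python
# Table 1 as a lookup: (Event1, Event2, Event3) -> detected event type.
DECISION_TABLE = {
    (1, 0, 1): "seizure",
    (0, 1, 1): "seizure",
    (0, 0, 1): "chewing",
    (1, 0, 0): "moving",
}

def classify_interval(event1, event2, event3):
    """Return the label for one analysis interval, or None if the
    indicator combination does not appear in Table 1."""
    return DECISION_TABLE.get((event1, event2, event3))
```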

In the next section, we present the results of the experiment conducted to test the proposed algorithm.

4. Computational experiment

To confirm the effectiveness of the developed algorithm, a computational experiment was conducted using video EEG monitoring data obtained in clinical conditions. The developed algorithm is implemented in the MATLAB software environment. The optical flow used as the measure of the patient's activity J(i) is calculated by the Lucas-Kanade algorithm [8]; this algorithm was chosen for its highest performance in comparison with other techniques. The value of the smoothed activity measure J̄(i) is determined using the discrete version of the Kalman-Bucy filtering algorithm [9]. The values of the parameters of the filtering algorithm are selected based on the analysis of test video sequences providing the best error-speed ratio. The values of J(i) and J̄(i) are normalized by the area of the region of interest. In the experiment, we applied the detection algorithm to long-term video EEG records of five patients. The video channel data were analyzed together with the data from three EEG channels selected at the preliminary analysis stage. We analyzed 43 events: ten of them correspond to epileptic seizures, thirteen are associated with food intake, and twenty with the patient's movement.
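A possible way to obtain the per-frame activity values J(i) is sketched below with OpenCV. Note that the paper uses the Lucas-Kanade algorithm [8]; this sketch substitutes OpenCV's dense Farneback flow purely for brevity, and the Farneback parameters and the normalization by the full frame area (rather than by a region of interest) are assumptions of the sketch.

```python
import cv2
import numpy as np

def activity_measure(video_path):
    """Yield the normalized total optical-flow magnitude for each frame pair."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense Farneback flow as a stand-in for the Lucas-Kanade flow of [8].
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mag = np.linalg.norm(flow, axis=2)              # per-pixel flow magnitude
        yield float(mag.sum()) / (gray.shape[0] * gray.shape[1])
        prev_gray = gray
    cap.release()
```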

The following results are obtained. Depending on the selected EEG channel, from 35 to 37 events were correctly detected, which amounts to 81.4 - 86 percent. Seizures were correctly localized from 8 to 10 times, chewing artifacts from 11 to 13 times, and events caused by patient movement from 14 to 16 times.

Fig. 1 shows graphs of the normalized measure of activity, the normalized smoothed measure, and the event indicators Event1 and Event2 for a fragment of a video on which an epileptic seizure is recorded. Fig. 2 shows the projection of the ridge of the EEG wavelet spectrogram in the time-power spectral density axes corresponding to the same fragment of the video record. The grey areas on the graph indicate the intervals at which the Event3 indicator takes the value 1.

Fig. 1. Illustration of the localization of the seizure in the video sequence of video EEG monitoring: graphs of the normalized measure J(t), the filtered normalized measure J̄(t), and the event indicators Event1 and Event2

From Fig. 1 and Fig. 2 it follows that an epileptic seizure is reliably detected in the video record and the wavelet spectrogram of the EEG signal according to the rule presented in Table 1.

Fig. 3 and Fig. 4 show the results of the analysis of the video channel and of the T6-O2 EEG channel for a fragment of the video EEG data on which the patient's food intake is recorded. In the video channel, the Event1 indicator takes the zero value over the whole fragment, and the Event2 indicator takes the value 1 in the interval between 32 and 40 seconds. At the same time, the Event3 indicator takes the value 1 between 0 and 8 seconds, as well as in several intervals after 120 seconds. In this case, following Table 1, the chewing artifact is registered in the intervals where the Event3 indicator equals 1 while the Event1 and Event2 indicators are zero.

Fig. 2. The projection of the ridge of the wavelet spectrogram in the time-power spectral density axes corresponding to the graphs shown in Fig. 1

Fig. 3. Illustration of the localization of the chewing artifact in the video sequence of video EEG monitoring: graphs of the normalized measure J(t), the filtered normalized measure J̄(t), and the indicators Event1 and Event2

Conclusions

As a part of the development of a technology for detecting epileptic seizures and differentiating epileptic and artifact events from video EEG monitoring data, an algorithm for automatic detection and recognition of events is proposed. The algorithm is based on the analysis of quantitative characteristics of video frames and EEG wavelet spectrograms. The analysis of video sequences is focused on identifying groups of frames with high and low scene dynamics according to a measure calculated as the magnitude of the optical flow. Preliminary results of the analysis of real clinical data are presented. The results of the analysis showed the efficiency of the proposed algorithm in differentiating epileptic seizures from movement and chewing. Further research will be aimed at combining EEG channels and applying patient tracking techniques such as [12] to improve the reliability of the proposed algorithm.

Fig. 4. The projection of the ridge of the wavelet spectrogram in the time-power spectral density axes corresponding to the graphs shown in Fig. 3

References

[1] Hirsch L, Brenner R. Atlas of EEG in critical care. John Wiley and Sons Inc; 2010.
[2] Tzallas AT, Tsipouras MG, Fotiadis DI. Automatic seizure detection based on time-frequency analysis and artificial neural networks. Comput Intell Neurosci 2007; 2007: 80510.
[3] Antsiperov VE, Obukhov YV, Komol'tsev IG, Gulyaeva NV. Segmentation of quasiperiodic patterns in EEG recordings for analysis of post-traumatic paroxysmal activity in rat brains. Pattern Recognit Image Anal 2017; 27(4): 789-803.
[4] Obukhov K, Kershner I, Komol'tsev I, Obukhov Y. Epileptiform activity detection and classification algorithms of rats with post-traumatic epilepsy. Pattern Recognit Image Anal 2018; 28(2): 346-353.
[5] Kershner IA, Sinkin MV, Obukhov YV. A new approach to the detection of epileptiform activity in EEG signals and methods to differentiate epileptic seizures from chewing artifacts [In Russian]. RENSIT 2019; 11(2): 237-242. DOI: 10.17725/rensit.2019.11.237.
[6] Murashov D, Obukhov Yu, Kershner I, Sinkin M. A technique for detecting diagnostic events in video channel of synchronous video and electroencephalographic monitoring data. CEUR Workshop Proc 2019; 2391: 285-292.
[7] Murashov D, Obukhov Y, Kershner I, Sinkin M. Detecting events in video sequence of video-EEG monitoring. ISPRS International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 2019; XLII-2/W12: 155-159. DOI: 10.5194/isprs-archives-XLII-2-W12-155-2019.
[8] Lucas BD, Kanade T. An iterative image registration technique with an application to stereo vision. Proceedings of Imaging Understanding Workshop 1981: 121-130.
[9] Kalman RE, Falb PL, Arbib MA. Topics in mathematical system theory. New York: McGraw-Hill; 1969.
[10] Tolmacheva RA, Obukhov YV, Polupanov AF, Zhavoronkova LA. New approach to estimation of interchannel phase coupling of electroencephalograms. J Commun Technol Electron 2018; 63(9): 1070-1075.
[11] Guillemain P, Kronland-Martinet R. Characterization of acoustic signals through continuous linear time-frequency representations. Proc IEEE 1996; 84(4): 561-585.
[12] Bohush RP, Zakharava IY. Person tracking algorithm based on convolutional neural network for indoor video surveillance [In Russian]. Computer Optics 2020; 44(1): 109-116. DOI: 10.18287/2412-6179-CO-565.


Authors' information

Dmitry Mikhailovich Murashov (b. 1958), graduated from Moscow Aviation Institute in 1981 with specialty "Automatic Control Systems." Received PhD degree in Engineering in 1990. Currently he works as the senior researcher at the Federal Research Center "Computer Science and Control" of Russian Academy of Sciences. Research interests are image processing, image analysis, and pattern recognition. E-mail: d_murashov@mail.ru .

Yury Vladimirovich Obukhov, (b. 1950), graduated from Moscow Institute of Physics and Technology in 1974, majoring in Applied Mathematics and Physics. Received PhD degree in 1982, and Dr. Sci. degree in 1992 with specialty "Experimental Physics". Currently he works as the chief scientist, head of the laboratory of "Biomedical Informatics" at the Kotel'nikov Institute of Radio Engineering and Electronics of Russian Academy of Sciences. Research interests are biomedical engineering and informatics. E-mail: yuvobukhov@mail.ru .

Ivan Andreevich Kershner, (b. 1992), graduated from Moscow Institute of Physics and Technology in 2016, majoring in Applied Mathematics and Physics. Currently he works as the junior researcher at the Kotel'nikov Institute of Radio Engineering and Electronics of Russian Academy of Sciences. Research interests are mathematics, computational methods, computer science, and signal processing methods. E-mail: ivan_kershner@mail.ru .

Mikhail Vladimirovich Sinkin, (b. 1977), graduated from Moscow State University of Medicine and Dentistry in 2000, as Doctor of Medicine. He currently works as a senior researcher and head of the neurophysiology laboratory of the Sklifosovsky Research Institute for Emergency Medicine. His research interest lies in the field of critical care neurophysiology and intraoperative neurophysiology monitoring (IONM). E-mail: mvsinkin@gmail.com .

GRNTI: 28.00.00. Received August 19, 2020. The final version - February 19, 2021.
