Application of Frequency Features of Optical Flow for Event Detection in Video-EEG Monitoring Data

Dmitry Murashov1*, Yury Obukhov2, Ivan Kershner2, and Mikhail Sinkin3,4

1 Federal Research Center "Computer Science and Control" of RAS, 44 Vavilova str., Moscow 119333, Russia

2 Kotel'nikov Institute of Radio Engineering and Electronics of RAS, 11 Mokhovaya str., Moscow 125009, Russia

3 N. V. Sklifosovsky Research Institute for Emergency Medicine of Moscow Healthcare Department, 3 Bolshaya Sukharevskaya Square, Moscow 129090, Russia

4 A. I. Yevdokimov Moscow State University of Medicine and Dentistry, 9A Vucheticha str., Moscow 127473, Russia * e-mail: d murashov@mail.ru

Abstract. The work is devoted to the study of the frequency features of the optical flow obtained from the video record of long-term video-electroencephalographic (video-EEG) monitoring data of patients with epilepsy. It is necessary to obtain features to recognize epileptic seizures and differentiate them from non-epileptic events. We propose to analyze the periodograms of the smoothed optical flow computed from the fragments of the patient's video recordings. We use Welch's method to obtain periodograms. The values of the power spectral density of the optical flow at the selected frequencies are used as features. Using the clustering algorithm, seven groups of events are identified in video recordings and combined into three generalized classes. We train an SVM classifier and conduct recognition of events in a test sample of 103 video fragments in four patients. The experiment indicates an event classification accuracy of 90.3%. © 2021 Journal of Biomedical Photonics & Engineering.

Keywords: video-electroencephalographic monitoring; optical flow; periodogram; Welch's method; clustering; classification.

Paper #3432 received 25 May 2021; revised manuscript received 12 Jul 2021; accepted for publication 14 Jul 2021; published online 30 Sep 2021. doi: 10.18287/JBPE21.07.030301.

1 Introduction

In clinical practice, video-electroencephalographic monitoring, a method of long-term synchronous recording of an electroencephalogram (EEG) and video image, has become widely used. Simultaneous video recording of the patient's clinical state and the bioelectrical activity of the brain makes it possible to reliably diagnose epileptic seizures and differentiate them from non-epileptic events [1, 2]. For recording and visual analysis of video-EEG data, physicians use specialized software (for example, the Galileo NT Line package), which provides a set of functions for signal processing and analysis, as well as statistical data analysis. If diagnostically important fragments of the EEG are found, the physician needs to review the corresponding area of interest in the video recording for visual assessment and differentiation of an epileptic and an artifact event. Visual analysis of video data is extremely laborious [21]; therefore, it becomes necessary to develop methods for automatic registration of epileptic seizures from video sequences obtained during video-EEG monitoring.

Several works address the analysis of video recordings for the detection and recognition of epileptic events. The work [3] presents methods for measuring the motion strength and motor activity of newborns using video recording. The quantitative characteristics obtained in the form of signals are used to differentiate myoclonic and clonic seizures and to distinguish seizures from the normal behavior of the newborn. The motion strength is defined as the area of the moving parts of the infant's body. To outline such fragments, the wavelet transform of frames, median filtering, and segmentation using an adaptive version of the k-means algorithm are used. The change in time of the coordinates of characteristic points, selected automatically on the child's limbs and tracked using the KLT (Kanade-Lucas-Tomasi) algorithm, generates signals that characterize motor activity. Modifications of the methods for measuring motion strength and motor activity are presented in the works [4, 5].

In the works [6, 7], the authors solved the problem of real-time detection of clonic seizures in newborns by the sequence of images obtained from video cameras. The value of the filtered average optical flow calculated as the sum of the binarized pixel-by-pixel difference in luminance of adjacent frames of the video sequence is analyzed. In Ref. [6], a feature of a seizure is the periodicity of the optical flow, which is detected using a hybrid autocorrelation-YIN estimation technique. In Ref. [7], the Maximum Likelihood criterion is used to determine the periodicity of the optical flow.

The paper [8] presents an algorithm for recognizing convulsive seizures in real-time from a sequence of frames recorded by a video camera. To analyze video images and detect events, the method described in Ref. [9] is used. The method consists in computing the components of the optical flow associated with group transformations of objects in frames (translation, rotation, dilatation, shear) and band-pass temporal filtering to identify an occurrence of clonic movements.

In the works [10, 11], the authors analyze video sequences recorded by multiple cameras to detect nocturnal motor seizures and central apneas occurring in the aftermath of epileptic seizures. The characteristics of the sigmoid patterns of the time-frequency spectrum (modulation maximum amplitude and total spectral power modulation at the time of the event) in the range of 0.1-1 Hz of translation, dilatation, and shear rates are used as features of a diagnostic event. These velocities are calculated by the method described in Ref. [9] from the optical flow generated by the patient's movements.

Previously, the authors of studies [12, 13] proposed to detect events in a video recording by the magnitude of the optical flow, which characterizes the degree of mobility of the frame area in which the patient is located. The algorithm was designed to detect both convulsive and non-convulsive seizures. The tests showed that the detected events quite accurately coincided with the events detected during the analysis of the EEG wavelet spectrograms. In Ref. [14], an algorithm for the synchronous analysis of the EEG signal and video recording for the detection and differentiation of diagnostic and artifact events was proposed. The algorithm combines a threshold detector of brain activity using the ridges of wavelet spectrograms [15] and a threshold detector of events in terms of optical flow [12, 13]. The results of the analysis of clinical data recorded on EBNeuro equipment and the Galileo NT Line package software showed the fundamental possibility of reliably distinguishing artifact events from epileptic seizures [14]. However, for more reliable recognition of epileptic seizures, additional features are needed.

The study of publications has shown that frequency analysis of signals obtained from an optical flow is widely used to extract features of epileptic seizures. This work aims to study the frequency features of the optical flow of video recording for the recognition of diagnostic and artifact events during the synchronous analysis of video-EEG data. Unlike the well-known works, the proposed technique is not restricted to a specific seizure type, and this study will consider a wider range of events recorded on video. We propose to analyze the periodograms of the smoothed optical flow, computed from the fragments of the video recording of the patients. Since this work does not consider the multidirectional components of the optical flow, the classical method of spectral analysis of signals will be used to obtain periodograms. As features, we will use the values of the power spectral density (PSD) at the selected frequencies.

2 Materials and Methods

As mentioned above, this work aims to obtain and study the features for recognizing epileptic seizures in the data of long-term video-EEG monitoring and their differentiation from non-epileptic events. In this study, we used video recordings from long-term video-EEG monitoring data of seven adult patients with an active level of wakefulness. The recordings were obtained with an HD camera using the Galileo NT Line package. The camera is fixed to the ceiling of the hospital room. The duration of the recordings is from 5 to 24 h or more. Each video consists of AVI files, each containing a three-minute fragment.

We will recognize events based on the analysis of the patient's movement recorded by a video camera. To measure the intensity of the patient's movement, we use the value of the smoothed total optical flow calculated in the region of interest in each frame of the video sequence. The optical flow is computed by the Lucas-Kanade method. To smooth the activity measure, we use a discrete version of the Kalman-Bucy filtering algorithm.

We form the feature descriptions of events recorded on video based on the frequency analysis of the smoothed measure of the patient's activity. To compute periodograms, we use Welch's method with a Hamming window of three sizes. We propose to use the values of the power spectral density of the measure of the patient's mobility at 14 selected frequencies as the features of events. To analyze the obtained feature descriptions of events and study the possibility of using pattern recognition algorithms for detecting events, we use a clustering algorithm with automatic selection of the optimal number of clusters. We use the Support Vector Machine (SVM) classifier to recognize events in video recordings. The classifier was trained on 78 three-minute video fragments of three adult patients with an active level of wakefulness. For testing, we used 103 video fragments of the other four adult patients.

In the following sections of the article, we describe in detail the application of the above-listed methods and data in our research.

2.1 A measure of patient's activity

Analysis of publications in the domain of detecting seizures from video sequences showed that the most common approach is based on the analysis of optical flow. In the works [13, 14], we proposed to detect diagnostic events using a measure characterizing the degree of activity of the region of interest. The region of interest is the part of the frame where the patient is located. The measure of activity of the region of interest is the total value of the optical flow calculated for each frame of the video sequence:

$$J(n) = \sum_{x=1}^{W} \sum_{y=1}^{H} \sqrt{V_x^2(x,y,n) + V_y^2(x,y,n)} + S(n), \qquad (1)$$

where $J(n)$ is the value of the activity measure calculated in the frame number $n$; $W$ is the width and $H$ is the height of the region of interest in pixels; $V_x(x,y,n)$, $V_y(x,y,n)$ are the optical flow values in the axial directions $X$ and $Y$ in the frame number $n$ at a pixel with coordinates $(x, y)$; $S(n)$ is noise. The measure of activity $J$ characterizes the intensity of movement of objects (patients) in the region of interest in frames of a video recording. To compute $V_x(x,y,n)$ and $V_y(x,y,n)$ in Eq. (1), we applied the Lucas-Kanade algorithm [16]. Two examples of seizure and food intake events from video-EEG data of two patients and computed optical flow vector fields are shown in Figs. 1 (a) and (b).

Fig. 1 Frames fixing seizure (a) and food intake (b) events from video-EEG data of two patients and computed optical flow vector fields (shown as arrows). The size of the shown region of interest is 800 by 570 pixels, and the size of the corresponding field of view is about 120 by 90 cm.
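For illustration, a minimal sketch of how the activity measure of Eq. (1) can be computed from a video file is given below. The paper uses the Lucas-Kanade method [16]; here OpenCV's Farneback dense optical flow is used as a readily available stand-in, and the region-of-interest coordinates are hypothetical.

import cv2
import numpy as np

def activity_measure(video_path, roi=(0, 0, 800, 570)):
    """Return a list with J(n), Eq. (1), for consecutive frame pairs."""
    x0, y0, w, h = roi                      # hypothetical ROI coordinates
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return []
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)[y0:y0 + h, x0:x0 + w]
    values = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)[y0:y0 + h, x0:x0 + w]
        # dense optical flow between consecutive frames (Farneback stand-in
        # for the Lucas-Kanade method used in the paper)
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # sum of flow magnitudes sqrt(Vx^2 + Vy^2) over the ROI, as in Eq. (1)
        values.append(float(np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2).sum()))
        prev = curr
    cap.release()
    return values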

Since the noise component $S(n)$ is present in model (1), we use the smoothed value of the activity measure, $\hat{J}(n)$, to detect events. The smoothed value $\hat{J}(n)$ is obtained using a discrete version of the Kalman-Bucy filtering algorithm [17]. We apply the Kalman-Bucy algorithm because it provides the optimal estimate in the sense of minimum error variance. The graphs of the values of the measure of activity $J(n)$ and the smoothed measure of activity $\hat{J}(n)$ obtained from a real video recording are shown in Fig. 2. In this video, an epileptic seizure is captured from 126 to 180 sec. The decision to fix a diagnostic event is made according to the threshold rule. To increase the reliability of event detection, it is necessary to take into account not only the amplitude but also the frequency characteristics of the measure of activity. In the next subsection, we present a technique for extracting the frequency features of the smoothed measure of patient's activity $\hat{J}(n)$.

Fig. 2 Graphs of measure of activity J and smoothed measure of patient's activity J obtained from a video recording of an epileptic seizure. The measure of activity J (as well as the smoothed measure J) characterizes the intensity of movement of patients in the region of interest in frames of a video recording.

2.2 Extracting frequency features of events

To construct periodograms, we use the Welch method with a Hamming window with 50% overlap [18]. The analyzed video data are presented in the form of three-minute fragments of the patient's video recording, which capture various events and their combinations: seizures, sleep, movement, food intake. The frame (sample) rate is equal to 20 frames per second. The sequence of samples of the smoothed activity measure $\hat{J}(0), \ldots, \hat{J}(n), \ldots, \hat{J}(N-1)$ with an interval of $T = 0.05$ sec is divided into $P$ segments of $D$ samples each, with a shift of $S$, $S < D$, samples between adjacent segments. The estimate of the Welch periodogram in the frequency range $-\frac{1}{2T} \le f \le \frac{1}{2T}$ is determined by the following expression:

$$\hat{P}_W(f) = \frac{1}{P} \sum_{p=0}^{P-1} \hat{P}^{(p)}(f), \qquad (2)$$

where $\hat{P}^{(p)}(f)$ is the spectrum of the weighted segment $p$:

$$\hat{P}^{(p)}(f) = \frac{1}{U} \left| X^{(p)}(f) \right|^2, \qquad (3)$$

where $X^{(p)}(f)$ is the discrete-time Fourier transform of the weighted segment $p$, and $U$ is the energy of the window $w$.

Using formulas (2) and (3), we obtained periodograms of different frequency resolutions at three window sizes: 10, 200, and 600 samples. Fig. 3 shows examples of periodograms computed at a window size equal to 200 from video records of different events in one of the patients. We studied 78 fragments of video recordings of three patients that captured the following events: epileptic seizure, intense movement in the frame, sleep, rest, smooth movement, and food intake. Fig. 3 shows that different events take different values of the power spectral density at a particular frequency value. To study the structure of the data that contain the obtained spectral characteristics of the optical flow, we use the technique of cluster analysis. The application of the clustering algorithm to distinguish groups of events by levels of power spectral density of periodograms obtained from video recordings of different events is described in the next subsection.
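The feature extraction described above can be sketched as follows, assuming the frame rate of 20 samples per second, Hamming windows of 10, 200, and 600 samples with 50% overlap, and 14 feature frequencies. The particular frequency grid below is illustrative; the paper only states that the selected frequencies lie between 0.5 and 8.8 Hz.

import numpy as np
from scipy.signal import welch

FS = 20.0                            # frame rate of the video, samples/s
WINDOW_SIZES = (10, 200, 600)        # Hamming window lengths in samples
FREQS = np.linspace(0.5, 8.8, 14)    # assumed 14 feature frequencies, Hz

def frequency_features(j_hat):
    """Build a feature vector: 3 window sizes x 14 frequencies = 42 PSD values."""
    features = []
    for d in WINDOW_SIZES:
        f, psd = welch(j_hat, fs=FS, window='hamming',
                       nperseg=d, noverlap=d // 2)
        # take the PSD value at the bin closest to each selected frequency
        idx = [int(np.argmin(np.abs(f - f0))) for f0 in FREQS]
        features.extend(psd[idx])
    return np.asarray(features)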

2.3 Partitioning frequency feature space

To study the possibility of using classifiers for detecting events, we applied a clustering algorithm. The algorithm is based on the search for locally optimal data partitions with automatic selection of the optimal number of clusters [19]. For optimization, the criterion for the minimum of the sum of intraclass variances is used:

$$I_k = \sum_{m=1}^{k} \sigma_m^2, \qquad \sigma_m^2 = \frac{1}{|G_m|} \sum_{x_i \in G_m} \rho(x_i, \bar{x}_m)^2, \qquad (4)$$

where $\bar{x}_m$ is the center of group $G_m$, to which the object $x_i$ is assigned; $\rho(x_i, \bar{x}_m)$ is the distance from object $x_i \in G_m$ to the center of group $\bar{x}_m$; $\sigma_m^2$ is the intraclass variance. Here, $x_i \in \mathbb{R}^N$, $\bar{x}_m \in \mathbb{R}^N$, where $N$ is the dimension of the feature space.

The optimal number of clusters $k$ in the range $a - 1 < k < b + 1$ is determined from the condition for the maximum of the functional

$$Y_k = \sum_{i=a}^{k-1} \frac{\Delta_i}{2^{k-i}} + \frac{\Delta_{a-1}}{2^{k-a}}, \qquad (5)$$

$$T_k = \sum_{i=k}^{b-1} \frac{\Delta_i}{2^{i-k+1}} + \frac{\Delta_b}{2^{b-k}}, \qquad (6)$$

where $\Delta_i = I_{i+1} - I_i$.

Fig. 3 Optical flow periodograms of recordings of various events for one of the patients computed at window size D = 200.

Table 1 Relative distances between cluster centers. Distances between the closest cluster centers are shown in bold.

Events               No    1     2     3     4     5     6     7
Eating               1     0     0.43  0.21  0.64  0.17  0.55  0.46
Seizure              2           0     0.55  1.0   0.55  0.95  0.19
Smooth movement      3                 0     0.46  0.11  0.49  0.53
Sleep + movement     4                       0     0.49  0.42  0.96
Rest after seizure   5                             0     0.44  0.55
Sleep                6                                   0     0.98
Movement             7                                         0


In the study, we used a set of feature descriptions of 78 fragments of video recordings of events taken from the data of 24-h video-EEG monitoring of three adult patients with an active level of wakefulness. The feature space is formed from the PSD values of the periodograms of the optical flow obtained with three window sizes (see Section 2.2). The PSD values were taken at 14 frequencies selected in the range from 0.5 to 8.8 Hz. Thus, the dimension of the composite feature vectors is equal to 42. The optimal number of clusters was chosen in the range from 2 to 8.
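A simplified sketch of this clustering step is given below. Ordinary k-means is used here as a stand-in for the locally optimal partitioning algorithm of Ref. [19], and criterion (4) is evaluated for each candidate number of clusters in the range from 2 to 8.

import numpy as np
from sklearn.cluster import KMeans

def intraclass_variance_sum(X, labels, centers):
    """Criterion (4): sum over groups of the mean squared distance to the center."""
    total = 0.0
    for m, c in enumerate(centers):
        members = X[labels == m]
        if len(members) > 0:
            total += np.mean(np.sum((members - c) ** 2, axis=1))
    return total

def cluster_events(X, k_range=range(2, 9)):
    """Partition the 42-dimensional feature vectors for each candidate k."""
    results = {}
    for k in k_range:
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        crit = intraclass_variance_sum(X, km.labels_, km.cluster_centers_)
        results[k] = (km.labels_, crit)
    return results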

The clustering algorithm described by formulas (4)-(6) partitioned the data extracted from the periodograms into seven groups of events characterized by similar PSD values at the selected frequencies: (1) food intake and combination of food intake and movement ("Eating"); (2) epileptic seizures ("Seizure"); (3) smooth movement ("Smooth movement"); (4) sleep and smooth movement ("Sleep + movement"); (5) rest and smooth movement after seizure ("Rest after seizure"); (6) sleep ("Sleep"), and (7) intensive movement ("Movement").

Table 1 gives the relative distances between the centers of the obtained clusters.

The results given in Table 1 show that the clusters of feature descriptions of several types of events are well-separated in the feature space, while the centers of several clusters are located relatively close. For example, the centers of the clusters "Seizure" and "Sleep", "Smooth movement" and "Movement" are significantly distant, while the centers of the clusters "Smooth movement" and "Rest after seizure", "Eating" and "Rest after seizure", as well as "Seizure" and "Movement" are located relatively close, since the nature of the patient's movements in these groups of events may be similar. Therefore, we combined closely spaced clusters and formed the following three generalized classes. The first class, "Seizure/Movement", includes epileptic seizures and episodes of intense movement in the region of interest (clusters of events "Seizure" and "Movement"). The second class, "Sleep", includes the patient's sleep and state of rest (clusters of events "Sleep" and "Sleep + movement"). The third class, "Smooth movement", includes events associated with movements of low intensity. This class combines episodes of eating, smooth posture changes, and working with a smartphone or laptop ("Eating", "Smooth movement", and "Rest after seizure" event clusters). Thus, the frequency features extracted from the video recordings of long-term monitoring of patients made it possible to distinguish three types of patterns that characterize generalized classes of events.

In the next section, to confirm the possibility of detecting events using frequency features, we will train the SVM classifier and recognize events from the test set of fragments from long-term video recordings of patients with epilepsy.

3 Results

We used the SVM algorithm [20] to classify events. The set of 78 feature descriptions of video recordings of events, described in Section 2.3, was used to train the classifier. Events are grouped into three generalized classes. In the SVM algorithm, we used a potential function in the form of a dot product. The leave-one-out cross-validation procedure was used for training. The results of training and quality measure values are given in Table 2.
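A minimal sketch of this training step is shown below, assuming the 42-dimensional feature vectors and generalized class labels described above; the names X_train, y_train, and X_test are placeholders. A linear kernel corresponds to the dot-product potential function mentioned above.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def train_and_classify(X_train, y_train, X_test):
    """Train a linear-kernel SVM with leave-one-out validation and label the test set."""
    clf = SVC(kernel='linear')
    # leave-one-out estimate of the training quality
    y_loo = cross_val_predict(clf, X_train, y_train, cv=LeaveOneOut())
    train_accuracy = float(np.mean(y_loo == y_train))
    # fit on the full training set and classify the test fragments
    clf.fit(X_train, y_train)
    return clf.predict(X_test), train_accuracy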

Feature descriptions of 103 three-minute fragments of video recordings of long-term monitoring in four patients formed the test sample. The analyzed video fragments captured events given in Table 1 and their combinations, including seven seizures and five events of food intake. It should be noted that one event could be recorded in several consecutive three-minute video fragments. Table 3 presents the results of testing and values of recognition quality measures.
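The per-class quality measures reported in Tables 2 and 3 follow from the corresponding confusion matrices; the sketch below reproduces the test counts of Table 3 and recomputes the metrics from them.

import numpy as np

# Confusion matrix reproducing Table 3 (rows: true class, columns: predicted
# class), in the order Seizure/Movement, Sleep, Smooth movement.
conf = np.array([[18, 1, 6],
                 [0, 24, 2],
                 [0, 1, 51]])
total = conf.sum()

for i, name in enumerate(["Seizure/Movement", "Sleep", "Smooth movement"]):
    tp = conf[i, i]
    fp = conf[:, i].sum() - tp            # other classes labelled as class i
    fn = conf[i, :].sum() - tp            # class i objects labelled otherwise
    tn = total - tp - fp - fn
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    npv = tn / (tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    print(f"{name}: precision={precision:.1%} recall={recall:.1%} "
          f"specificity={specificity:.1%} NPV={npv:.1%} F1={f1:.1%}")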

4 Discussion

The relatively low percentage of correctly recognized events from the "Seizure/Movement" class can be explained in the following way. The camera recorded most of the epileptic seizures from the test sample on several consecutive 3-min videos. Some records contain only the initial or final seizure stages. The duration of these stages is significantly shorter than three minutes. The size of the window used in calculating the estimates of periodograms by formulas (2), (3) is much less than the number of frames in a three-minute video recording, and the spectra of weighted segments (3) can vary significantly. In this case, the frequency pattern of the three-minute fragment, characterized by the estimate (2), may be distorted and the fragment assigned to the wrong class. It should be noted that in the considered test set of video recordings, all epileptic seizures captured in several consecutive three-minute fragments are detected correctly in at least one of the fragments. In general, the percentage of correctly classified events is equal to 90.3% of the total.

Table 2 Results of training SVM classifier.

Classes            Classified objects   True        False       From class 1   From class 2   From class 3
Seizure/Movement   21                   19          2           19             0              2
Sleep              31                   29          2           0              29             2
Smooth movement    25                   20          5           3              2              20
Reject             1                                            0              0              1
Total              78 (100%)            68 (87.2%)  9 (11.2%)   22             31             25
Precision                                                       90.5%          93.5%          80%
Recall                                                          86.4%          93.5%          80%

Table 3 Results of recognition of a test sample.

Classes                     Classified objects   True        False      From class 1   From class 2   From class 3
Seizure/Movement            18                   18          0          18             0              0
Sleep                       26                   24          2          1              24             1
Smooth movement             59                   51          8          6              2              51
Total                       103 (100%)           93 (90.3%)  10 (9.7%)  25             26             52
Precision                                                               100%           92.3%          86.4%
Recall                                                                  72%            92.3%          98%
Specificity                                                             100%           97%            84%
Negative predictive value                                               92%            97%            98%
F1-score                                                                83.7%          92.3%          91.8%

Table 4 Performance of various event detection algorithms [6-11].

Algorithm/reference                 Task
Kouamou Ntonfo et al. (2012) [6]    Detection of neonatal clonic seizures
Cattani et al. (2017) [7]           Apneas detection; clonic seizure detection
Geertsema et al. (2018) [8]         Recognition of convulsive seizures
Kalitzin et al. (2012) [9]          Segmentation of clonic seizures
Geertsema et al. (2020) [10]        Detection of central apneas
van Westrhenen et al. (2020) [11]   Detection of nocturnal motor seizures in children

The precision, recall (sensitivity), specificity, and negative predictive values reported in these works are: 27-64%, 60-93%, 90-100%, 76-92%, 57-100%, 95%, >90%, 94%, 67-86%, 78-83%, 88-96%, >99%, and 93-96%.


Table 4 shows the characteristics of event recognition obtained by analyzing video sequences of video-EEG monitoring data by different methods presented in the works that we considered in Section 1. From the data in Tables 3 and 4 and taking into account the above remarks, it follows that the results of event detection (precision, recall, and specificity) obtained in this work correspond to the performance of the algorithms proposed in Refs. [6-11].

It is possible to increase the accuracy of event classification by reducing the duration of the analyzed video fragments and augmenting the training sample of events. Differentiation of seizures from chewing and movement artifacts will be carried out in the synchronous analysis of video and EEG recordings.

5 Conclusions

We studied the possibility of using the frequency characteristics of video recordings for analyzing long-term video-electroencephalographic monitoring data. Periodograms of the smoothed optical flow, calculated from the fragments of video recordings that captured various events, were computed. To obtain periodograms, we used Welch's method with a Hamming window of three sizes. The values of the power spectral density of the optical flow at fourteen selected frequencies are used as features of events. We applied the clustering algorithm with automatic selection of the optimal number of clusters to obtain an optimal data partition. The preliminary results of the frequency analysis of the optical flow, computed from the video-EEG monitoring data of patients, were obtained. Seven groups of events were identified in the feature space using the clustering algorithm. These groups were combined into three generalized classes. To confirm the possibility of detecting events by the frequency characteristics of video recordings, we trained an SVM classifier and conducted recognition of events in a test sample of 103 video fragments in four patients. The percentage of correctly classified events is equal to 90.3% of the total. Further research will be aimed at improving the accuracy of detecting events in video recordings of patients by frequency features and developing an algorithm for synchronous analysis of video-EEG monitoring data.

Disclosures

All authors declare that there is no conflict of interests in this paper.

Acknowledgements

This research was carried out within the framework of the state task and was partially supported by the RFBR, grant No 18-29-02035.

References

1. M. Patel, P. Satishchandra, J. Saini, R. D. Bharath, and S. Sinha, "Eating epilepsy: Phenotype, MRI, SPECT and video-EEG observations," Epilepsy Research 107(1-2), 115-120 (2013).

2. T. Chen, Y. Si, D. Chen, L. Zhu, D. Xu, S. Chen, D. Zhou, and L. Liu, "The value of 24-hour video-EEG in evaluating recurrence risk following a first unprovoked seizure: A prospective study," Seizure 40, 46-51 (2016).

3. N. B. Karayiannis, S. Srinivasan, R. Bhattacharya, M. S. Wise, J. D. Frost, and E. M. Mizrahi, "Extraction of motion strength and motor activity signals from video recordings of neonatal seizures," IEEE Transactions on Medical Imaging 20(9), 965-980 (2001).

4. N. B. Karayiannis, G. Tao, "Improving the extraction of temporal motion strength signals from video recordings of neonatal seizures," In Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance, 22 July 2003, Miami, FL, USA, 87-92 (2003).

5. N. B. Karayiannis, A. Sami, J. D. Frost, M. S. Wise, and E. M. Mizrahi, "Automated extraction of temporal motor activity signals from video recordings of neonatal seizures based on adaptive block matching," IEEE Transactions on Biomedical Engineering 52(4), 676-686 (2005).

6. G. M. Kouamou Ntonfo, G. Ferrari, R. Raheli, and F. Pisani, "Low-Complexity Image Processing for Real-Time Detection of Neonatal Clonic Seizures," IEEE Transactions on Information Technology in Biomedicine 16(3), 375-382 (2012).

7. L. Cattani, D. Alinovi, G. Ferrari, R. Raheli, E. Pavlidis, C. Spagnoli, and F. Pisani, "Monitoring infants by automatic video processing: A unified approach to motion analysis," Computers in Biology and Medicine 80, 158-165 (2017).

8. E. E. Geertsema, R. D. Thijs, T. Gutter, B. Vledder, J. B. Arends, F. S. Leijten, G. H. Visser, and S. N. Kalitzin, "Automated video-based detection of nocturnal convulsive seizures in a residential care setting," Epilepsia 59(S1), 53-60 (2018).

9. S. Kalitzin, G. Petkov, D. Velis, B. Vledder, and F. L. da Silva, "Automatic Segmentation of Episodes Containing Epileptic Clonic Seizures in Video Sequences," IEEE Transactions on Biomedical Engineering 59(12), 3379-3385 (2012).

10. E. E. Geertsema, G. H. Visser, J. W. Sander, and S. N. Kalitzin, "Automated non-contact detection of central apneas using video," Biomedical Signal Processing and Control 55, 101658 (2020).

11. A. van Westrhenen, G. Petkov, S. N. Kalitzin, R. H. C. Lazeron, and R. D. Thijs, "Automated video-based detection of nocturnal motor seizures in children," Epilepsia 61(S1), S36-S40 (2020).

12. D. Murashov, Yu. Obukhov, I. Kershner, and M. Sinkin, "Detecting Events in Video Sequence Of Video-EEG Monitoring," International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 42(2/W12), 155-159 (2019).

13. D. Murashov, Yu. Obukhov, I. Kershner, and M. Sinkin, "A technique for detecting diagnostic events in video channel of synchronous video and electroencephalographic monitoring data," CEUR Workshop Proceedings 2391, 285-292 (2019).

14. D. M. Murashov, Y. V. Obukhov, I. A. Kershner, and M. V. Sinkin, "An algorithm for detecting events in video EEG monitoring data of patients with craniocerebral injuries," Computer Optics 45(2), 301-305 (2021).

15. K. Obukhov, I. Kershner, I. Komoltsev, and Yu. Obukhov, "Epileptiform Activity Detection and Classification Algorithms of Rats with Post-traumatic Epilepsy," Pattern Recognition and Image Analysis 28(2), 346-353 (2018).

16. B. D. Lucas, T. Kanade, "An iterative image registration technique with an application to stereo vision," Proceedings of Imaging Understanding Workshop, 121-130 (1981).

17. R. E. Kalman, R. S. Bucy, "New results in linear filtering and prediction theory," Journal of Basic Engineering 83(1), 95-108 (1961).

18. S. L. Marple Jr., Digital spectral analysis with applications, Prentice-Hall, Inc., Englewood Cliffs, NJ (1987).

19. Yu. I. Zhuravlev, V. V. Ryazanov, and O. V. Sen'ko, Recognition. Mathematical methods. Software system. Practical applications, Phasis, Moscow (2006) [In Russian].

20. R. O. Duda, P. E. Hart, and D. G. Stork, Pattern classification, 2nd ed., Wiley-Interscience, New York (2001).

21. J. Dobesberger, G. Walser, I. Unterberger, K. Seppi, G. Kuchukhidze, J. Larch, G. Bauer, T. Bodner, T. Falkenstetter, M. Ortler, G. Luef, and E. Trinka, "Video-EEG monitoring: safety and adverse events in 507 consecutive patients," Epilepsia 52(3), 443-452 (2011).
