
PRIKASPIYSKIY ZHURNAL: Upravlenie i Vysokie Tekhnologii (CASPIAN JOURNAL: Management and High Technologies), 2014, 4 (28). SIGNAL AND DATA PROCESSING, PATTERN RECOGNITION, REVEALING OF REGULARITIES AND FORECASTING


NEURODYNAMIC APPROACH FOR SLEEP APNEA DETECTION1

The article was received by the editors on 20.11.2014, in its final version on 13.12.2014.

Devyatykh Dmitriy V., post-graduate student, National Research Tomsk Polytechnic University, 30 Lenin Avenue, Tomsk, 634050, Russian Federation, e-mail: ddv.edu@gmail.com

Gerget Olga M., Ph.D. (Engineering), National Research Tomsk Polytechnic University, 30 Lenin Avenue, Tomsk, 634050, Russian Federation, e-mail: olgagerget@mail.ru

Berestneva Olga G., D.Sc. (Engineering), Professor, National Research Tomsk Polytechnic University, 30 Lenin Avenue, Tomsk, 634050, Russian Federation, e-mail: ogb6@yandex.ru

The urgency is based on the need to develop algorithms for detecting obstructive sleep apnea episodes in asthma patients. The main aim of the study was to develop a neural network model for breathing analysis that allows recognition of breath patterns and prediction of anomalies that may occur. The class of machine learning algorithms includes many models. Widespread feed-forward networks can solve classification tasks efficiently, but are not quite suitable for processing time-series data. The paper describes the results of training and testing several types of dynamic (recurrent) networks: NARX, Elman, distributed and focused time delay. The methods used in the study include machine-learning algorithms, namely the dynamic neural network architectures: focused time-delay network; distributed time-delay network; non-linear autoregressive exogenous model; using the Matlab Neural Network Toolbox 2014a software. For the purpose of the research we used a dataset that contained 39 recordings. The records were obtained by the pulmonology department of the Third Tomsk City Hospital; typical recordings were 8-10 hours long and included electrocardiography and oronasal airflow. The sampling frequency of these signals was 11 Hz. The results are presented as the performance of the training and testing processes for various types of dynamic neural networks. In terms of classification accuracy the best results were achieved by the non-linear autoregressive exogenous model.

Keywords: Obstructive sleep apnea, overlap syndrome, dynamic neural networks, recurrent neural networks, tap delay lines, feedback connections, machine learning, resilient propagation, pattern recognition, time-series prediction

1 The reported study was partially supported by RFBR, research project No. 14-07-00675. The article was written as a part of project No. 1957 of the Government Task «Science» of the Ministry of Education and Science of the Russian Federation. The article was presented at the Joint Conference on Knowledge-Based Software Engineering 2014, Volgograd.


Introduction. The combination of breath disorders during sleep and asthma may be described by a collective term, overlap syndrome [15]. The degree of health damage caused by overlap syndrome is much worse than that caused by each of its components on its own. Here health damage means reduced performance of the respiratory function [3, 16]. Thus, it is necessary to recognize and classify clinical data and provide better ways of treating obstructive breathing disorders during sleep in asthma patients. Nowadays polysomnography analysis is the common way of diagnosing apnea. It includes synchronous recording of several signals, usually electrocardiography, oronasal airflow and blood oxygen saturation. Such recordings are usually obtained in sleep laboratories. The majority of current research uses polysomnography records to count the minutes with apnea episodes during an hour; the number of minutes with apnea episodes defines the apnea-hypopnea index, which indicates disease severity. Intelligent decision-making systems may extend the possibilities of apnea diagnosis [19].


The purpose of this article is to research the effect of dynamic neural network architecture on the accuracy of breath-pattern time-series classification and precise apnea episode detection. The apnea index was not counted; instead, the exact start and end points of each apnea episode were located.

Data. For the purpose of the research we used a dataset that contained 39 records. They were obtained by the pulmonology department of the Third Tomsk City Hospital. Typical recording durations were 8-10 hours; the records included electrocardiography (ECG) and oronasal airflow. It is worth mentioning that the number of records is not equal to the number of patterns presented to the network during the learning process. Patients with high apnea severity may have 5-14 apnea episodes per hour. The whole dataset included approximately 1500 breath patterns that could be classified as apnea episodes. That number of breath patterns, combined with normal oronasal episodes, was enough to create a network that is not over-fitted and is resistant to input signal variations.

The sampling frequency of the polysomnography signals (ECG, oronasal airflow, chest movement) was 11 Hz. Such a frequency is not mandatory, but medical experts prefer the 2-15 Hz polysomnography frequency range because ECG-derived methods are often used to derive the respiration record, and the main power frequency of the QRS-complex lies in the 2-15 Hz range. These particular parts of the ECG record are vital for breath pattern modeling when there is no opportunity to record oronasal data.

The data was collected using a hardware-software polygraph complex. It included an ECG recording module, three thermal sensors (two for the nose and one for the mouth) that provided the oronasal breath pattern, and chest movement detectors. So the polysomnography included ECG, oronasal and chest movement data, recorded during the patients' sleep. The common sensor positions on the patient's body are presented in fig. 1.

Fig. 1. Polysomnography sensors location (taken from http://www.charlestonpulmonology.com/images/sleep-study2.JPG)

Most of the patients had asthma and represented a wide age bracket (25-60 years old). Patients were not divided by age or gender categories; the dataset included both men and women.


Each record had been preliminarily analyzed by sleep physiologists, who marked obstructive sleep apnea episodes with high precision. Unlike most datasets, which provide only the apnea-hypopnea index, our data was supported by annotations that revealed the exact seconds when each apnea episode began and ended. So each recording consists of two time-series of the same dimension: the first contains the oronasal airflow values sampled at 11 Hz; the second contains «0» and «1» values («0» corresponds to «no apnea», while a run of «1» values flags the current segment of the oronasal record as an «apnea episode»).
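The 0/1 target series described above can be derived mechanically from the physiologists' annotations. The sketch below (our illustration, not the authors' code; all names are hypothetical) builds such a mask for an 11 Hz airflow record from a list of (start, end) apnea intervals given in seconds:

```python
# Illustrative sketch: build the second time-series described above -- a 0/1
# mask aligned with an 11 Hz oronasal record -- from expert annotations given
# as (start_second, end_second) apnea intervals. Names are hypothetical.

FS = 11  # sampling frequency of the oronasal airflow record, Hz

def apnea_mask(n_samples, episodes, fs=FS):
    """Return a list of 0/1 flags, one per airflow sample.

    episodes: iterable of (start_s, end_s) pairs in seconds, as marked
    by the sleep physiologist; samples inside any episode get 1.
    """
    mask = [0] * n_samples
    for start_s, end_s in episodes:
        lo = max(0, int(start_s * fs))
        hi = min(n_samples, int(end_s * fs))
        for i in range(lo, hi):
            mask[i] = 1
    return mask

# one minute of record with a 10-second apnea episode from t = 20 s to t = 30 s
mask = apnea_mask(60 * FS, [(20, 30)])
```

At 11 Hz a 10-second episode covers 110 consecutive «1» samples, which is the kind of run the networks are trained to localize.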

Dynamic neural network architecture. An artificial neural network is a system based on the operation of biological neural networks; in other words, it emulates a biological neural system. Like its biological predecessor, it consists of neurons, the basic elements of a network. There are a large number of different types of networks, but they are all characterized by the following components: a set of neurons and a set of connections between them.

Artificial Neuron. The neuron, in its turn, consists of three basic elements. The synapses of the biological neuron are modeled as weights. Recall that the synapse of a biological neuron is what interconnects the neural network and gives the strength of the connection. For an artificial neuron the weight is a number that represents the synapse. A negative weight reflects an inhibitory connection, while positive values designate excitatory connections. The following components of the model represent the actual activity of the neuron cell. All inputs are summed together after being modified by the weights; this activity is referred to as a linear combination. Finally, an activation function controls the amplitude of the output: an acceptable output range is usually between 0 and 1, or between -1 and 1. The mathematical model of the artificial neuron is presented in fig. 2.

Fig. 2. Artificial neuron mathematical model

From this model the internal activity of the neuron can be shown to be:

v_k = Σ_{j=1..p} w_kj · x_j, (1)

where w_kj denotes the weight of the connection between neuron k and input j of the previous layer, and v_k is the induced local field of the neuron (the weighted sum of all synaptic inputs).


Activation Function. The output of the neuron is therefore the outcome of some activation function applied to the value of the internal activity. As mentioned previously, the activation function acts as a squashing function, such that the output of a neuron in a neural network lies between certain values. Common activation functions are the logistic and the hyperbolic tangent, respectively:

φ(v) = 1/(1 + exp(-v)), (2)

φ(v) = (exp(2v) - 1)/(exp(2v) + 1). (3)

These functions produce outputs between 0 and 1 or between -1 and 1, respectively. It should also be mentioned that their derivatives are easy to calculate, which is vital for the training process.
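A minimal sketch of the neuron model of fig. 2 with the activation functions (2) and (3); this is our illustration under assumed names, not code from the paper:

```python
# Artificial neuron sketch: induced local field v_k = sum_j w_kj * x_j,
# eq. (1), followed by a squashing activation, eqs. (2)-(3).
import math

def local_field(weights, inputs):
    # weighted sum of all synaptic inputs
    return sum(w * x for w, x in zip(weights, inputs))

def logistic(v):
    # eq. (2); output in (0, 1); derivative is logistic(v)*(1 - logistic(v))
    return 1.0 / (1.0 + math.exp(-v))

def tanh_act(v):
    # eq. (3); output in (-1, 1); algebraically equal to tanh(v)
    return (math.exp(2 * v) - 1) / (math.exp(2 * v) + 1)

v = local_field([0.5, -0.25], [1.0, 2.0])   # 0.5 - 0.5 = 0.0
out = logistic(v)                           # logistic(0) = 0.5
```

The easily computed derivatives mentioned above are what back-propagation-style training relies on.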

Neural Network Topology. Neurons are used as basic building blocks for creating networks of various architectures. Neural networks can be classified into dynamic and static categories. Static (feedforward) networks have no feedback elements and contain no delays; the output is calculated directly from the input through feedforward connections. A multilayer perceptron is shown in fig. 3.

Fig. 3. Multilayer perceptron: input layer, hidden layer and output layer

Implicit time representation. Time is an essential part of network functioning. It can be represented in continuous or discrete form, but no matter which form it takes, it is the basis of signal processing. Time can be embedded into network operation implicitly, so that it has an indirect effect on signal processing, i.e. it is not supplied to the input neurons of the network. The implicit representation of time allows a static network to be endowed with dynamic properties.

In order to transform a static neural network into a dynamic one, a memory feature has to be added [7]. There are long-term and short-term memory types. Long-term memory exists even in the multilayer perceptron: it is integrated into the network during the training procedure, when the informative content of the training dataset is stored in the network in the form of the weight values. The simplest and most common form of short-term memory is implemented as a memory based on the tapped delay line [11]. Fig. 4 shows a neuron with built-in short-term memory based on the tapped delay line and the static network. The input signal includes the current value x(n) and the past values x(n - 1), x(n - 2), ..., x(n - p) that are being stored in the short-term memory.
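The tapped delay line of fig. 4 can be sketched as a sliding window over the input sequence: at step n the static part of the network sees x(n), x(n - 1), ..., x(n - p). This is our illustrative code, not the paper's implementation:

```python
# Tapped-delay-line short-term memory sketch: each yielded vector is the
# input the static network would receive at one time step.
from collections import deque

def tapped_delay_windows(signal, p):
    """Yield (p+1)-long input vectors [x(n), x(n-1), ..., x(n-p)]."""
    line = deque([0.0] * (p + 1), maxlen=p + 1)  # delay line, zero-initialised
    for x in signal:
        line.appendleft(x)        # newest sample enters, oldest falls off
        yield list(line)

windows = list(tapped_delay_windows([1, 2, 3, 4], p=2))
# windows[3] == [4, 3, 2]: current sample plus two delayed ones
```

This also makes the FTDNN argument below concrete: with a fixed p the same windows could be fed to an ordinary perceptron with p extra input neurons.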

Another way of representing time in neural networks is applying feedback connections. Depending on whether feedbacks connect neurons from the same or from distant layers, they are classified as local or global.



Fig. 4. Static neural network with tap-delay memory line

Creating a dynamic neural network starts with choosing some static architecture, like the perceptron, and adding local and global recurrent connections and tap delay memory. The number of possibilities and the lack of methodology make this approach rather complex: it brings a wide spectrum of dynamic network forms which is hard to choose from. However, all these networks with various forms of implicit time representation may be classified as dynamic neural networks. Unlike static networks, they take into consideration the temporal structure of the data [6].

To perform the task of localizing and recognizing apnea episodes in breath patterns we experimented with three architectures of dynamic neural networks. These exact architectures were chosen because each of them represents a basic way of implementing one of the key topological features of dynamic networks.

Focused Time Delay Neural Network (FTDNN). This network topology is the most straightforward. It involves a static feed-forward network with a tapped-delay input. It is a general dynamic network; its dynamics appears only at the stage of presenting the input signal, which is supplemented by previous values. There is some debate about whether this network is truly dynamic or static, because it may be presented as a multilayer perceptron with additional input neurons. However, it can use a time sequence as input, i.e. the input signal length is not restricted by the number of neurons in the input layer of the network.

Distributed Time Delay Neural Network (DTDNN). This network topology implies the presence of tapped delay line memory not only for the input signals, but also for the hidden layer. That means that we present the current input signal to the corresponding layer of the network and, additionally, supplement the network with past input signals through short-term memory lines. However, the activations of the hidden layer neurons are not instantly transferred to the output neurons, but circulate between the input and hidden layers to generate internal states of the network.

Nonlinear Autoregressive Exogenous Inputs (NARX). Such models have much in common with distributed networks. The distinction lies in the additional signals that are given to the network: while other dynamic networks may extract additional information from the input sequence by using past values of internal states, this topology allows supplementing the network with past values of the output signals.

The defining equation for the NARX model is

y(t) = f(y(t - 1), y(t - 2), ..., y(t - n_y), u(t - 1), u(t - 2), ..., u(t - n_u)). (4)

Here y(t) defines the output at time step t, and u denotes the input vector submitted to the input layer at time step t. The key feature of this architecture is that the next value of the output signal y(t) depends on both past output and input signals. Depending on the current layer, the input signal may be taken from the dataset or from a previous-layer neuron. From equation (4) it remains unclear which output signals the input neurons should take. There are two ways to get previous output signals.


Open loop. During supervised training of the network the true outputs are known, and these true outputs are fed back to the network for generating the output signal at time step t.

Closed loop. In real-life situations it is impossible to expect true output signals as previous values for the NARX network. In the closed-loop state the network uses its actual output signals at previous time steps. This state of the network is usually turned on when the training and validation processes are complete. The difference between the two states of the network is visualized in fig. 5.
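The two modes of eq. (4) and fig. 5 can be contrasted on a toy recurrence. The model below is hand-picked for illustration (it is not a trained network, and the names are ours): in the open (series-parallel) loop the recorded outputs are fed back, in the closed (parallel) loop the model's own predictions are.

```python
# Toy NARX-style model y(t) = 0.5*y(t-1) + u(t-1) with n_y = n_u = 1,
# used only to show the open-loop vs closed-loop feedback difference.

def narx_step(y_prev, u_prev):
    return 0.5 * y_prev + u_prev

def run_open_loop(u, y_true):
    # teacher forcing: the TRUE past output is fed back at every step
    return [narx_step(y_true[t - 1], u[t - 1]) for t in range(1, len(u))]

def run_closed_loop(u, y0):
    # free running: the model's OWN past output is fed back
    y = [y0]
    for t in range(1, len(u)):
        y.append(narx_step(y[-1], u[t - 1]))
    return y

u = [1.0, 1.0, 1.0, 1.0]
y_true = [0.0, 1.0, 1.5, 1.75]   # generated by the same recurrence
```

With a perfect model both modes agree; once the model is imperfect, closed-loop errors compound, which is consistent with the larger closed-loop MSE values reported in Table 3.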


Fig. 5. Parallel architecture (closed-loop) and series-parallel architecture (open-loop) scheme

Training process of dynamic neural networks. Having defined the choice of network architecture, we proceed to the stage of training. The obvious question that arises when choosing a method of optimizing the network weighting coefficients is: can the same techniques be used as for training static networks? This is possible, but the network has to be preliminarily transformed by unfolding the dynamic network into a more cumbersome static network [8].

Figures 6a and 6b show an example of unfolding an Elman dynamic network with one input and one hidden neuron into a static network. The arrows define the flow of the input signal during forward propagation. The original network has a hidden neuron that is connected to itself via feedback, while the modified version is a simple forward-propagation network with an additional layer. This method of unfolding the network does not alter the algorithms for generating the output values and serves mostly for convenient representation of the network in computing systems.


Fig. 6. Elman neural network: a) before unfolding; b) after unfolding

As seen from this figure, the dynamic network after conversion to a static one has much in common with the perceptron. Supervised learning algorithms based on error back propagation are applicable to dynamic networks [5]. With such an unfolding every feedback leads to an additional hidden layer, i.e. the more feedback connections are implemented, the more hidden layers are added


to the unfolded version of the network. One of the modern trends in neural network development is associated with the study of multilayer perceptrons with many hidden layers. In the literature these networks are called «deep» [10]. One of the main problems faced by researchers when dealing with such networks is vanishing gradients [9]. In this case the network, even without being stuck in a local minimum, ceases to learn, since according to the weight-update formulas the new values of the weighting coefficients change too slowly. This leads to a requirement for a training method in which the weight changes, represented by

Δw_ji(n) = η · δ_j(n) · y_i(n), (5)

do not take into account the exact value of the local gradient

δ_j(n) = e_j(n) · φ'_j(v_j(n)), (6)

where e_j(n) is the error signal and η is a constant that defines the speed of weight change.

For deep and unfolded dynamic networks the same methods can be successfully applied. Genetic algorithms [17] and resilient propagation, also known as the RProp algorithm [18], are preferable. The genetic algorithm is based on random search; it is a heuristic way of optimization and thus does not require computing gradient values. Resilient propagation belongs to the group of gradient search methods, but it requires only the sign of the gradient: the change of a weight mostly depends on whether the local gradient is positive or negative.

The resilient propagation algorithm is defined as

Δ_ji(n) = η⁺ · Δ_ji(n - 1), if (∂E(n)/∂w_ji) · (∂E(n - 1)/∂w_ji) > 0, (7)

Δ_ji(n) = η⁻ · Δ_ji(n - 1), if (∂E(n)/∂w_ji) · (∂E(n - 1)/∂w_ji) < 0, (8)

0 < η⁻ < 1 < η⁺, (9)

where ∂E/∂w_ji = -e_j(n) · φ'_j(v_j(n)) · y_i(n). If the derivative changed its sign at the current iteration, the last adjustment of the weight was too large and the local minimum was skipped by the algorithm. Thus the weight correction factor should be changed from η⁺ to η⁻, and the weight returned to its previous value. If the sign of the derivative remains the same, the previous correction value is increased by the factor η⁺. By fixing the η⁺ and η⁻ values it is possible to get rid of the constant learning rate used in the common back-propagation algorithm. Different variations of the RProp algorithm allow limiting the maximum and minimum correction applied to the weights and different weight-initialization approaches.
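The sign-based update of eqs. (7)-(9) can be sketched for a single weight as follows. This is a hedged illustration: the constants η⁺ = 1.2 and η⁻ = 0.5, and the step bounds, are the common choices from the RProp literature, not values stated in the paper.

```python
# RProp step-size adaptation sketch, eqs. (7)-(9): the step grows by eta_plus
# while the gradient keeps its sign, and shrinks by eta_minus when the sign
# flips (the skipped-local-minimum case). Only the SIGN of the gradient is
# used to move the weight.

def rprop_step(step, grad, prev_grad, eta_plus=1.2, eta_minus=0.5,
               step_max=50.0, step_min=1e-6):
    """Return (new_step, weight_delta) for a single weight."""
    if grad * prev_grad > 0:          # same sign: accelerate, eq. (7)
        step = min(step * eta_plus, step_max)
    elif grad * prev_grad < 0:        # sign flip: back off, eq. (8)
        step = max(step * eta_minus, step_min)
    # the weight moves opposite the gradient sign, by the adapted step only
    sign = (grad > 0) - (grad < 0)
    return step, -sign * step

step, delta = rprop_step(step=0.1, grad=2.0, prev_grad=1.0)
# same sign -> step grows to 0.12, weight decreased by 0.12
```

Note that the magnitude of the gradient (2.0 here) never enters the update, which is exactly the property the text above requires for deep unfolded networks.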

Experimental results. This part reveals the results of detecting sleep apnea using dynamic neural networks with different architectures.

Training and testing datasets. To recognize apnea we used several dynamic neural network types. As feature vectors we used two input signals from the database: one took part in the learning process, the other was picked for testing purposes. These signals are shown in figures 7a and 7b.

The input signals have different lengths, and that is a significant advantage compared to static networks, which are suitable only for processing vectors of the same dimension. For each input signal we had a corresponding target signal, designated in figures 8a and 8b. They may look similar, but it should be considered that every target signal can be examined only in conjunction with its corresponding input signal.


Fig. 7a. Learning input signal; 7b. Testing input signal

Fig. 8a. Learning target signal; 8b. Testing target signal

The more signals presented for learning, the better the network will work. However, the training and testing datasets consisted of relatively short (40-60 s long) segments of polysomnography signals. The longer the signals from the training dataset are, the longer it takes to train the network, but the purpose was to compare the effect of neural network architecture on accuracy, not training speed. The short signals accommodated full apnea episodes, and the dataset composed of them did not increase training time significantly.
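A segment of the kind described above could be cut from a long record as sketched below; this is our illustration of the windowing idea (the 15 s padding and all names are our assumptions, not values from the paper):

```python
# Cut a short training segment around one annotated apnea episode so that the
# segment accommodates the full episode plus normal breathing on both sides.

FS = 11  # Hz, sampling frequency of the airflow record

def cut_segment(signal, episode, pad_s=15, fs=FS):
    """Return the slice around one (start_s, end_s) apnea episode."""
    start_s, end_s = episode
    lo = max(0, int((start_s - pad_s) * fs))
    hi = min(len(signal), int((end_s + pad_s) * fs))
    return signal[lo:hi]

record = [0.0] * (3600 * FS)             # one hour of airflow samples
seg = cut_segment(record, (120, 130))    # 10 s episode with 15 s padding
# yields a 40 s segment, i.e. within the 40-60 s range used in the study
```

A ~10 s episode with 15 s of context on each side lands at 40 s, the lower end of the segment lengths the study used.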

Network Training and Topology Characteristics. We researched three different network architectures with several constant and variable parameters. The constants were the following.

• Training method: resilient back propagation;

• Neurons in the hidden layer: 10;

• Stop criteria:

o successful validation checks (30 % of the learning input sequence);

o target Mean Square Error (MSE) value reached;

o maximum number of iterations reached (1000);

o minimum performance gradient reached.

The variable parameters are the time delays for the layers of the network. Each network was trained four times, each time with one of the stopping criteria being active. For each network we thus gained four training and testing results, depending on which stop criterion finished the training. From these results we chose those


that provided the lowest mean-square error. For creating, training and scoring the networks we used the MatLab Neural Network Toolbox.

Apnea Detecting Results. The following tables represent the network classification accuracy depending on the number of time delays and the topology type.

Table 1

Focused time delay network apnea detecting results

Topology: FTDNN. Lowest MSE of 4 attempts for different numbers of time delays in the input layer:

Delays    1      5      10     20      40      100
Train   0.022  0.012  0.002  1×10⁻⁵  1×10⁻⁸  1×10⁻⁷
Test    0.106  0.110  0.157  0.193   0.263   0.36

The focused delay network showed that increasing the number of time delays for the input layer may increase performance during training, but it also reduces accuracy during the testing process. Such tendencies took place for all 4 stopping criteria applied. None of the networks achieved appropriate accuracy on the testing sequence.

Table 2

Distributed time delay network apnea detecting results

Topology: DTDNN. Lowest MSE of 4 attempts for different numbers of time delays in the input : hidden layers:

Delays    1:1    5:5    10:5    20:5    40:10   100:20
Train   0.012  0.004  9×10⁻⁴  3×10⁻⁴  4×10⁻⁵  1×10⁻⁴
Test    0.118  0.112  0.094   0.113   0.17    0.5

For this type of network it is not necessary to set the same number of time delays for the input and hidden layers. Time delays for the hidden layer significantly increased the learning time, not in terms of the number of iterations, but in terms of computational speed. The distributed time delay network also did not manage to achieve appropriate accuracy. For apnea detecting it is obvious that such types of dynamic networks as FTDNN and DTDNN require larger learning sequences. Even a simple perceptron can provide good results with a good learning dataset. Nevertheless, creating such a small learning sequence reveals the abilities of the networks.

The NARX network showed the best results; however, it required a precise choice of the number of input- and output-layer delays. The next table gives mean-square error values for training and testing in both the open-loop and the closed-loop state of the network.

Table 3

NARX apnea detecting results (lowest MSE of 4 attempts; "?" marks digits illegible in the source)

Time delays (input : output):    1:1               5:5              10:10            20:20            40:40
Open loop,   Train / Test MSE:   5×10⁻? / 1×10⁻?   0.921 / 8×10⁻?   1.942 / 9×10⁻⁴   0.429 / 9×10⁻³   0.107 / —
Closed loop, Train / Test MSE:   0.337 / 0.312     0.98 / 2.32      1.152 / 2.316    2.561 / 3.017    1.125 / 1.141


Table 3 (continued)

Time delays (input : output):    60:60             100:100           150:150           170:170          25:170
Open loop,   Train / Test MSE:   3×10⁻⁵ / 0.338    1×10⁻⁶ / 0.178    1×10⁻⁷ / 0.082    1×10⁻⁸ / 0.08    1×10⁻⁸ / 0.064
Closed loop, Train / Test MSE:   0.637 / 0.729     0.216 / 0.382     0.039 / 0.051     1×10⁻³ / 0.07    1×10⁻⁴ / 0.04

The NARX network outperformed the distributed and focused time-delay networks in terms of MSE in both the open-loop and closed-loop states. Even with small learning and testing sequences it showed acceptable results. Interestingly, asymmetric delays (the number of input delays not equal to the number of output delays) gave better accuracy than equally large delays on both layers. In medical terms, an apnea episode lasts approximately 10 seconds. Given the 11 Hz sampling frequency of our input signal, we conclude that the number of time delays should be at least equal to the length, in samples, of the anomaly to be detected in the time sequence. This rule, however, should determine the number of time delays only for the output layer. Delays for the input layer may be significantly smaller: this improves accuracy slightly and, more importantly, speeds up learning because it removes weights from the network.
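The delay-sizing rule above reduces to simple arithmetic on the figures reported in this paper (10-second episodes, 11 Hz airflow signal); the sketch below only makes the sample counts explicit.

```python
SAMPLE_RATE_HZ = 11     # airflow signal sampling frequency (this paper)
APNEA_DURATION_S = 10   # approximate length of an apnea episode

# Output-layer delays should cover at least one whole anomaly.
min_output_delays = SAMPLE_RATE_HZ * APNEA_DURATION_S  # 110 samples

# The best configuration in Table 3 used asymmetric delays: a small input
# window and an output window longer than one apnea episode.
input_delays, output_delays = 25, 170

assert output_delays >= min_output_delays
assert input_delays < output_delays  # fewer input delays -> fewer weights
```

With 170 output delays the network's output window spans about 15.5 seconds, comfortably longer than one episode, while the 25-sample input window keeps the weight count down.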

Conclusions. In this paper we presented results of apnea detection by analyzing polysomnography data with dynamic neural networks. The main issue we faced was finding appropriate network parameters: the learning algorithm, the number of hidden-layer neurons, the activation functions and, foremost, the number of time delays of the dynamic networks. Not all dynamic networks are equally powerful; the NARX network proved superior in accuracy. However, this type of network requires a more complicated testing process because of the two states used (open loop and closed loop), so outputs had to be computed twice. Assuming that in real clinical use there is no opportunity to obtain the true output values instantly, we prioritized the closed-loop results: a network that works accurately in the open loop but misclassifies input sequences in the closed loop was considered inappropriate. Although evaluating the speed of learning methods was not a goal of this research, we note that the more time delays we assigned, the more iterations were required to find an adequate set of weights.
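The two evaluation states discussed above can be illustrated with a toy one-step autoregressive predictor. This is not the paper's MATLAB NARX model; the fixed coefficients a and b are assumptions made only so the sketch runs. In the open loop the true past output is fed back (teacher forcing); in the closed loop the model's own previous prediction is fed back, as would happen in real clinical use.

```python
def predict_series(x, y_true, a=0.5, b=0.5, closed_loop=False):
    """NARX-style one-step predictor: y[t] = a*x[t-1] + b*y[t-1].

    Open loop uses the true y[t-1] (available only offline); closed loop
    reuses the model's own previous prediction, so errors can accumulate.
    """
    preds = [y_true[0]]                      # initial condition from the data
    for t in range(1, len(x)):
        prev_y = preds[-1] if closed_loop else y_true[t - 1]
        preds.append(a * x[t - 1] + b * prev_y)
    return preds

def mse(pred, true):
    return sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true)
```

Because closed-loop errors feed forward into later predictions, a network can look accurate in the open loop yet drift in the closed loop, which is why closed-loop MSE was treated as the deciding figure.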

As future work we plan to use the PhysioNet databases (http://physionet.org/physiobank/database), to enlarge the learning and testing data sets to improve the networks' generalization ability, and to implement a genetic algorithm for training dynamic neural networks.

