

http://journals.nstu.ru/vestnik Science Bulletin of the NSTU Vol. 79, No. 2-3, 2020, pp. 57-76


INFORMATICS, COMPUTER ENGINEERING AND CONTROL

UDC 004.93 DOI: 10.17212/1814-1196-2020-2-3-57-76

Recognition of Russian and Indian sign languages used by the deaf people*

R. ELAKKIYA1,a, M.G. GRIF2,b, A.L. PRIKHODKO2,c, M.A. BAKAEV2,d

1 SASTRA Deemed University, School of Computing, CSE, Thanjavur, Tamil Nadu, 613401, India

2 Novosibirsk State Technical University, 20 K. Marx Prospekt, Novosibirsk, 630073, Russian Federation

a elakkiya@cse.sastra.edu, b grif@corp.nstu.ru, c alexeyayay@yandex.ru, d bakaev@corp.nstu.ru

In our paper, we consider approaches towards the recognition of sign languages used by the deaf people in Russia and India. The structure of the recognition system for individual gestures is proposed based on the identification of its five components: configuration, orientation, localization, movement and non-manual markers. We overview the methods applied for the recognition of both individual gestures and continuous Indian and Russian sign languages. In particular, we consider the problem of building corpora of sign languages, as well as sets of training data (datasets). We note the similarity of certain individual gestures in the Russian and Indian sign languages and specify the structure of the local dataset for static gestures of the Russian sign language. For the dataset, 927 video files with static one-handed gestures were collected and converted to JSON using the OpenPose library. After analyzing 21 points of the skeletal model of the right hand, the obtained reliability for the choice of points was 0.61, which was found insufficient. It is noted that the recognition of individual gestures, and of sign speech in general, is complicated by the need for accurate tracking of the various components of the gestures, which are performed quite quickly and are further complicated by the overlapping of hands and face. To solve this problem, we further propose an approach related to the development of a biosimilar neural network, which is to process visual information similarly to the human cerebral cortex: identification of lines, construction of edges, detection of movements, identification of geometric shapes, determination of the direction and speed of the objects' movement. We are currently testing a biologically similar neural network proposed by A.V. Kugaevskikh on video files from the Russian sign language dataset.

Keywords: Russian sign language, Indian sign language, gesture recognition, deaf sign components, artificial neural network, machine learning, training data sets


Received 15 January 2020.

INTRODUCTION

Communication and collaboration between deaf-mute people and hearing people is hindered by the lack of a common language. Although there has been a lot of research in this domain, there is still room for work towards a system that is ubiquitous, non-invasive, works in real time and can be trained interactively by the user. Sign Language (SL) serves as a communication medium for the deaf and hard-of-hearing community. SL is used primarily by deaf and hearing-impaired people, but also by hearing people who cannot speak or have trouble with the spoken language because of other disabilities (augmentative communication) or conditions such as Parkinson's disease. Sign languages are not international and are not the same all over the world. Currently, there is no clarity about the number of sign languages in use worldwide: every country has its native SL, and some countries have more than one. Some of the sign languages in existence are American Sign Language (ASL), British Sign Language (BSL), Chinese Sign Language (CSL), German Sign Language (DGS), Indian Sign Language (ISL), Russian Sign Language (RSL), etc. Also, some sign languages have obtained legal recognition, whereas others have no such status.

The problem of computer-aided SL recognition has high social importance, and many researchers around the world work on it. Still, it currently cannot be considered satisfactorily resolved, mostly due to the low accuracy of SL recognition.

1. GENERAL OVERVIEW OF THE RUSSIAN AND INDIAN SIGN LANGUAGES USED BY THE DEAF PEOPLE

According to the data of the 2011 Census released by the World Health Organization, the total population of deaf persons in India numbered about 5 million, and mute persons numbered around 2 million. The Indian Sign Language (ISL) is used in the deaf-mute community all over India, but ISL is not used to teach deaf children in schools. Teacher training programs do not orient teachers towards teaching methods that use ISL. There is no teaching material that incorporates a sign language. Parents of deaf children are not aware of the availability of a sign language and its ability to remove communication barriers. ISL interpreters are urgently required at institutes and places where the deaf and hearing people communicate, but India has fewer than 300 certified sign language interpreters [1]. It has even been argued that other countries, such as Nepal, Sri Lanka, Bangladesh and some border areas of Pakistan [2], also use ISL.

Sign languages habitually bear considerable resemblance to their corresponding spoken languages; however, an SL has its own structure and grammar, and it varies with the efficiency and fluency of signing. Although general linguistics considers both signed and spoken languages as different types of natural language, a sign language should not be dismissed as mere body language, i.e. as a form of non-linguistic communication. Like other sign languages, ISL has its own structure, syntax, morphology, phonology and grammatical variations. ISL conveys meaning visually instead of through spoken words. This communication involves a simultaneous combination of both manual and non-manual means of expression. Manual parameters include the hand shape, hand position, hand orientation, hand trajectories and arm movements, while non-manual parameters include facial expressions, head and body postures, mouth patterns and gaze directions. Together, all these expressions convey the intended meaning and information of the signer in terms of visual projection. ISL consists of both isolated words [3] and continuous sentences, like other sign languages. Fig. 1 represents the ISL alphabet.

Fig. 1. The ISL alphabet

The official ISL dictionary is continuously updated, starting from 1,000 words in the initial release to 3,000 words in the second release, and now to a vocabulary of 6,000 words in various categories. Unlike ASL and other SLs, ISL is highly complex because:

• It consists of a combination of single- and double-handed sign gestures, and double-handed signs are frequent even for isolated words.

• With double-handed signs, there is a high chance of the hands overlapping and occluding facial expressions.

• The positioning of the hands with respect to the face and body implies different signs at different locations.

In India, it is estimated that more than one million deaf adults and more than half a million deaf children use ISL [4]. Still, there are certain limitations to developing a dictionary for ISL, which arise due to cultural factors and societal impacts. Some of them [2] are:

• In rural parts of India, people with impairments are ill-treated, and signing with gestures is discouraged.

• Until the late 1990s, it was believed that there was no such thing as ISL, and so the Indian system lacked research into ISL linguistics.

• The lexicon, syntax and grammar of ISL are not standardized or documented, and ISL automation tools for learning are not available.

• Availability of ISL interpreters is often problematic.

The focus on ISL studies began after 1978, and it was finally accepted countrywide that ISL is a language in its own right, with its varieties used in cities such as Delhi, Mumbai, Kolkata and Bengaluru [5]. Later, the Ramakrishna Mission Vivekananda University [6] collated signs from across 42 places in the country and released a sign dictionary of 1,600 words. ISL Recognition (ISLR) is a breakthrough for helping impaired (deaf-mute) people and has been actively researched in recent years. Unfortunately, every reported study has its own limitations and is still unable to be used commercially; some achieved certain success in recognizing SL but required high costs to be commercialized. Nowadays, researchers pay more attention to developing ISLR systems that can be used commercially. Tracking and recognizing specialized multimodal gesture signs is crucial, especially in recognizing signs and sign gestures.

The Russian Sign Language is used by hearing-impaired people in the Russian Federation (120.5 thousand users, according to the 2010 Census) and, to some extent, in the former Soviet republics. Despite the significant number of users, the RSL received its official backing only recently, after the amendments to the Federal Law "On the social protection of the disabled in the Russian Federation" were signed by President V. Putin at the end of 2012. There, the RSL is defined as "the communication language used in case of hearing and/or speaking impairments, particularly in the context of the oral use of the Russian Federation's official state language".

A comprehensive review and comparison of the dictionaries for the three different variants (dialects) of RSL, the St. Petersburg, Moscow, and Siberian dialects, can be found, for example, in [7]. At the first stage of that research, continuous sampling was done from the four RSL dictionaries [8-11], so that the signs were extracted and organized alphabetically. The total number of signs in the resulting sample amounted to nearly 13,000 lexical items. At the second stage, a comparative table of the signs included in the above lexicographic sources was composed, and a comparative analysis was performed. The research resulted in a joint list of the signs contained in the covered dictionaries, in total including 6,200 items.

The subsequent analysis of the language material was aimed at identifying the number of the signs that correspond to homonyms and polysemantic words in the Russian language [12], as well as at refining the understanding of the performance of the signs corresponding to homonyms [13], of which 54 sign pairs were identified. Unlike the spoken homonyms, these signs are performed differently, and their performance allows accurate communication of the meaning. For the polysemantic words, in total 280 signs were identified; their particularity is that the different performance of the corresponding signs allows the communication of the meaning without relying on the context. Some signs from this group are imitative, and some have a performance similar to the non-verbal component that accompanies the corresponding terms in the Russian language.

The particular features of the RSL word-formation system are as follows:

1. The basic units of the word-formation system are chains, paradigms and nests in which motivating and motivated gestures are highlighted; the motivating words of the spoken Russian language are not always the names of the motivating gestures.

2. The system does not have means that fully correspond to the word-formation formants of the Russian language. However, it has its own specific means of forming new gestures. Since the sign language uses the visual-kinesthetic channel for transmitting information instead of sounds, the gestures similar to same-root words can be created by combining two independent RSL gestures, adding special additional gestures (for example, the gesture meaning man) to the nominative gesture, repeating an additional gesture, changing the amplitude / intensity of the gesture or its localization, converting a one-handed gesture into a two-handed one, or using facial expressions and / or turning the body when performing the gesture. These means are applied systematically, which suggests the existence of original word-formation models in RSL, some of which have analogues in the spoken Russian language.

3. The sign formation techniques in RSL can differ:

- similar performance of the gestures that are cognates from the point of view of word formation of the Russian language, but are not included in word-formation chains;

- identical performance of the gestures whose analogues are cognates in the Russian language;

- dissimilarity in the performance of the gestures, whose names in Russian are cognates.

RSL has gestures similar to the classes of words that are called parts of speech in the spoken Russian language. In RSL, noun gestures predominate (at least 66 %), while in the 20th century there were more adjective gestures. The shares of the verb gestures are almost identical in all available dictionaries (9-11 %), and in general the data on the numbers of gestures corresponding to numerals, pronouns, adverbs, participles, conjunctions, interjections, prepositions and particles differ only slightly. For instance, the I.F. Geilman dictionary does not contain predicates, while the video dictionary developed by the Social Support Institute of NSTU lacks modal words.

2. THE STRUCTURE OF THE SIGN LANGUAGES RECOGNITION SYSTEM

The main objective of an SL Recognition (SLR) system is to recognize a large vocabulary in unrestricted environments, which would make communication between the hearing-impaired community and hearing people easy. SL linguistics is mainly composed of three components: manual signals, non-manual signals and finger spelling. Manual signals are made only with hand gestures, employing the hand shape, position, orientation and motion trajectories. Non-manual signals are made with facial expressions, body postures and head positions, which are used as a part of the sign or to modify the meaning of a manual signal. Finger spelling consists of the gestures that spell out words as individual letters of the local verbal language.

The manual component is essential for recognizing a sign language. Manual signals are further divided into three major components: the hand shape, hand motion and place of articulation. When manual signals are considered without the non-manual components, they can be treated as a subset of the elements of gestural communication. Also, manual signals are highly structured and restricted, and more complex in the case of two-handed signs.

Although manual cue analysis is treated as a part of gesture communication, it requires more specialized methods for large-vocabulary sign recognition or for analyzing the correlation of the hands. Many of the existing approaches in SLR focus on hand postures, i.e. static hand shapes, ignoring the fact that many sign languages contain signs with motion invariants. For a large-vocabulary recognition system, it is highly infeasible to recognize all signs with the help of static postures only. Fig. 2 shows various methods of extracting manual signals in SLR systems.

Non-manual signals in SLR play a vital role in conveying a significant amount of meaningful information in addition to the manual signals. The most useful non-manual cues are facial expressions, lip movements and head pose. Non-manual cue expressions include raising or lowering the eyebrows, eye gaze, head nods and shakes, nose wrinkling, lip movements and different degrees of eye aperture. These cues act as indicators and provide supplementary information, working as a modulation function that adds lexical and semantic properties to the signs. The combination of facial expressions and head pose estimation helps in understanding the grammatical status, including question types, negations, 'when' clauses and relative clauses.
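To make the component structure concrete, below is a minimal sketch of how a recognition pipeline might represent a decoded sign split into the five components discussed in this paper (configuration, orientation, localization, movement and non-manual markers). The field names, label types and matching rule are our illustrative assumptions, not an API from the cited works.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SignObservation:
    """One decoded sign, split into the five components discussed above.
    Field names and label encodings are illustrative assumptions."""
    configuration: str          # hand shape, e.g. a HamNoSys-like label
    orientation: str            # palm / finger orientation label
    localization: str           # place of articulation relative to the body
    movement: List[str] = field(default_factory=list)    # trajectory primitives
    non_manual: List[str] = field(default_factory=list)  # brows, gaze, mouthing

def same_manual_components(a: SignObservation, b: SignObservation) -> bool:
    """Two signs match manually if all four manual components coincide;
    non-manual markers may still modify the meaning."""
    return (a.configuration == b.configuration
            and a.orientation == b.orientation
            and a.localization == b.localization
            and a.movement == b.movement)
```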

3. APPROACHES TOWARDS RECOGNITION OF SIGN LANGUAGES

Research in the field of SLR mainly focuses on two directions, namely Isolated Sign Recognition (ISR) and Continuous Sentence Recognition (CSR). Isolated word recognition involves recognizing single- and double-hand static postures that the signer performs to convey information, whereas continuous recognition involves identifying the sequence of gestures signed by the signer one after the other.

Fig. 2. Extraction of manual signals in SLR

Of these two recognition problems, CSR is quite different: in hand gesture recognition it is treated as gesture spotting, while in sign language recognition it is treated as a co-articulation problem. The co-articulation problem makes recognition more complex because the preceding sign affects the following one, and the transitions between the signs, i.e. epenthesis movements (EM), ought to be explicitly or implicitly modelled to be integrated into the recognition systems.

ISLR research started with isolated and continuous sign recognition based on device-based approaches using sensors and trackers. Although these approaches produce accurate results in tracking and pointing the gestures, the signer loses their natural way of signing by being constantly required to wear burdensome devices or trackers on their hands. On the other hand, vision-based approaches to sign language recognition provide a user-friendly environment for signers. However, this approach also faces several challenges in CSR: handling the occlusion of hands over the face, the co-articulation problem, segmenting, detecting the hand and finger configuration, and modelling the transition movements between the signs. In order to overcome these challenges, many vision-based approaches use differently coloured gloves on the hands or colour markers for the fingers. Despite all these, marker-free sign language detection, recognition and classification in cluttered and unrestricted environments is an open research problem.

Let us consider individual sign recognition for the ISL. An ISLR system proposed by Nandi et al. [14] recognized 22 ISL signs with an accuracy of 92.29 %. Rekha et al. [15] produced an accuracy of 91.30 % for 26 ISL gestures using 2D computer vision techniques; however, their approach suffered from varying illumination. Lilha and Shivmurthy [16] developed an ISL recognition system for static and dynamic sign gestures. Their system achieved an accuracy of 98.1 %, but a signer would lose the natural way of signing due to the necessity of wearing a wristband to differentiate the palm and forearm. Adithya et al. [17] used Artificial Neural Networks (ANN) to recognize ISL alphabets and numbers. Their system showed an accuracy of 91.1 % but failed to cope in a real-time environment. Dixit and Jalal [18] proposed an approach to recognize single- and double-handed ISL gestures for 720 isolated words and attained an accuracy of 96.2 %.

Ananya et al. [4] adapted Conditional Random Fields (CRF) to segment the one-handed and two-handed signs of isolated ISL and got accuracies of 90 % and 86 %, respectively. Sahoo and Ravulakollu [19] designed a recognition system for isolated signs using the K-Nearest Neighbour (KNN) and ANN classifiers. They achieved 95 % accuracy for single-handed signs and 96 % for double-handed signs. Singh et al. [20] decomposed single- and double-handed features using the Histogram of Gradients (HOG) and geometric descriptors and classified them using the Support Vector Machine (SVM) and ANN. Their system produced an accuracy of 94.23 %. Gangrade et al. [21] recognized ISL numbers from 0 to 9 using a bag of words and achieved an accuracy of 93.26 %. However, all the developed systems recognize only isolated words and include only manual features. It is mandatory for an SLR system to include both manual and non-manual parameters to produce an accurate result.

Let us further consider continuous ISL recognition. Bhuyan et al. [22] introduced a novel method for recognizing transition movements between continuous signs in trajectory-based gesture recognition. They used the concept of recognizing the co-articulation point between fast and slow frames to separate the transition movements from the sign gestures. Li and Greenspan [23] proposed a more efficient gesture segmentation method for continuous gesture recognition using continuous dynamic programming and got an accuracy of 95 %. Bhuyan et al. [24] proposed a gesture trajectory model to identify dynamic gestures, and their approach achieved an accuracy of 95 %. Kishore and Kumar [25] designed an ISL system to recognize ISL gestures in videos with different complex backgrounds and achieved an accuracy of 96 % for 351 signs.

Nanivadekar et al. [26] proposed a step algorithm that takes into consideration motion tracking, pattern recognition and hand tracking. Their system worked on videos of dynamic gestures, but they failed to consider phrases and facial movements. Kishore et al. [27] proposed a 4-camera model to segment the hand gestures using the features obtained from elliptical Fourier descriptors and classified them using an ANN. The recognition rate of their system was about 92.23 %. Tripathi et al. [28] separated continuous gestures using the gradient method, calculating the gradient for each frame and also checking the overlap between consecutive frames. Prasad et al. [29] developed an ISLR system with a recognition rate of 92.34 % for 80 self-collected video sequences consisting of 59 letters and numbers and 20 words. Athira et al. [28] developed a signer-independent ISLR model with finger-spelling alphabets and dynamic single-handed signs, with recognition accuracies of 91 % and 89 %, respectively.
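As an illustration of the boundary-spotting idea behind such gradient-based segmentation, the sketch below flags low-motion frames as candidate boundaries between signs, since pauses between signs tend to have low motion energy. This is a frame-differencing simplification under our own assumptions (including the arbitrary threshold), not the exact method of the cited works.

```python
import numpy as np

def boundary_frames(frames, thresh=12.0):
    """Return indices of frames whose difference from the previous frame
    is small, i.e. candidate sign boundaries. `thresh` is an arbitrary
    illustrative value, not taken from the cited works."""
    boundaries = []
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(np.float32) - frames[i - 1].astype(np.float32))
        if diff.mean() < thresh:
            boundaries.append(i)
    return boundaries

# Usage with a synthetic clip of 30 grayscale frames, 64x64 pixels each
clip = [np.random.randint(0, 255, (64, 64), dtype=np.uint8) for _ in range(30)]
print(boundary_frames(clip))
```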

Let us consider the particulars of vision-based and sensor-based recognition. Most researchers have focused on vision-based approaches to recognize the ISL. Rekha et al. [15] used Wavelet Packet Decomposition and principal curvature-based region detectors for recognizing ISL hand postures and produced an accuracy of 93.1 %. Bhuyan and Bora [30] recognized dynamic and static gestures of the ISL with the aid of a hand-oriented video abstraction technique based on hand shapes, trajectories and hand motion. Agarwal et al. [31] adapted a feature fusion algorithm for recognizing gestures by extracting geometric features, HOG and Scale Invariant Feature Transform (SIFT) features and achieved an accuracy of 93 %. Joshi et al. [32] considered manual features as essential, focusing particularly on the boundary of the shape; it was observed that the enhancement is achieved only up to a particular level, so the accuracy saturates at higher orders. Kumar et al. [33] analyzed the performance of combinations of different feature vectors, and Kaur et al. [34] proposed an ISL recognition system that acquired a high accuracy for a feature vector of size 638. The large size of the feature vector imposes problems in terms of the memory space requirement and the time to process the feature vector.

Mehrotra et al. [20] recognized 37 ISL signs and attained an accuracy of 86.16 % based on 3D skeleton point features from the Kinect sensor using SVM. Raheja et al. [35] proposed an ISLR system based on depth information from Kinect and used SVM for classifying the signs. Kumar et al. [36] proposed a multimodal SLR framework that combines data from the Leap Motion Controller (LMC) and a Kinect sensor to detect the signs; they recognized 50 ISL signs and attained an accuracy of 40.23 % for all gestures. Joshi et al. [37] designed a unimodal feature fusion that helps minimize the feature vector size and enhances performance for all the datasets, but fails in recognizing the ISL complex-background dataset. Raghuveera et al. [38] proposed an ensemble method to recognize ISL single-handed signs, double-handed signs and finger-spelling signs on 4,600 images and got an accuracy of 71.85 %. All these sensors have advantages in terms of low cost, as well as drawbacks related to motion data. However, all these methods used sensors to recognize the signs, and addressing epenthesis movements still remains an open challenge.

Let us consider RSL recognition. The research in this field [39, 40] started with the translation of the spoken Russian language into the sign language using differential object marking [41, 42]. Lately, the translation approach has been complemented by RSL recognition using dynamic programming [43] and Convolutional Neural Networks (CNN) [44]. However, these methods relied on the Kinect sensor for the recognition of the features [45]. In [46-48], attempts are made to recognize RSL signs and the non-manual components. The general requirement is that the recognition algorithms must work in real time and recognize the signs as they unfold in the sign space [49].

4. SIGN CORPUS

For the Indian Sign Language, there is the IIITA-ROBITA ISL dataset [50] developed by the Indian Institute of Information Technology Allahabad, with 23 sign gestures signed by one native signer. A vocabulary of 140 symbols [51], created with 18 subjects and consisting of 5,041 images of mostly double-handed gestures, and another set of 24 static ISL hand shapes [52] are also used. A dataset of 3,000 images consisting of alphabets, numbers, words and emotions is used to recognize signs in different domains, such as sports and traffic symbols. However, the datasets cited in the research articles are not made publicly available for download as open source. The same author created a new dataset consisting of 100 ISL sign sentences signed by two native signers, and the preliminary pre-processing of the collected data is underway using the methodologies discussed in [53]. It was initially planned to proceed with the classification [54], but given the state-of-the-art performance of deep learning technologies in today's sign recognition systems, the classification will be done using fine-tuned Convolutional Neural Networks and Long Short-Term Memory (LSTM) networks. Once the pre-processing and classification are completed, the data will be made available as open source for other researchers to make use of.
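For illustration, a minimal sketch of such a CNN+LSTM pipeline is given below: a per-frame convolutional encoder followed by an LSTM that pools the frame features over time. The layer sizes, input shape and class count are our own assumptions for the sketch, not the configuration planned for the dataset above.

```python
import torch
import torch.nn as nn

class CnnLstmSignClassifier(nn.Module):
    """Per-frame CNN features pooled by an LSTM over time (a sketch)."""
    def __init__(self, num_classes, feat_dim=128, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(                 # per-frame encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):                     # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])                   # logits per sign class

# Usage with a random batch of 2 clips of 16 frames, 112x112 pixels
logits = CnnLstmSignClassifier(num_classes=100)(torch.randn(2, 16, 3, 112, 112))
```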

The application of deep learning techniques, which currently show their effectiveness in general visual recognition, remains problematic for SLs due to the limited availability of labeled datasets. Compared to spoken languages, such datasets are scarce, and for some national sign languages they are virtually non-existent. One of the directions of our work involves the creation of an SL dataset for the Russian language, for which we have already collected 927 video files with static one-handed signs (Fig. 3). Each gesture is shown by 2-3 different people with 5 repetitions. After converting the video files to JSON using the OpenPose library and analyzing the 21 points of the right-hand skeletal model, we obtained a point selection confidence of 0.61. Fig. 4 presents the initial gesture, its markup in OpenPose and the estimates of the probability of choosing each of the 21 points of the skeletal model of the finger joints. This confidence so far remains insufficient for recognizing one-handed static gestures, so we are considering other approaches, as described in the current paper.


Fig. 3. An extract from the dataset with one-handed signs we are developing for the Russian SL

Fig. 4. The procedure of OpenPose detection for assessing the probability (confidence) of choosing 21 points of the skeletal model of finger joints
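A minimal sketch of how such a confidence estimate can be computed from OpenPose output is given below. OpenPose writes one JSON file per frame, where each detected person carries a hand_right_keypoints_2d array of 21 (x, y, c) triplets; the dataset directory name used here is hypothetical.

```python
import json
from pathlib import Path

def right_hand_confidence(json_path):
    """Average detection confidence over the 21 right-hand keypoints
    in one OpenPose frame file (the array stores x, y, c triplets)."""
    with open(json_path) as f:
        frame = json.load(f)
    people = frame.get("people", [])
    if not people:
        return None                     # no person detected in this frame
    keypoints = people[0]["hand_right_keypoints_2d"]   # 21 * 3 = 63 values
    confidences = keypoints[2::3]       # every third value is the confidence c
    return sum(confidences) / len(confidences)

# Average the per-frame scores over a dataset directory (name is hypothetical)
scores = [s for s in (right_hand_confidence(p)
                      for p in Path("rsl_dataset_json").glob("*.json"))
          if s is not None]
if scores:
    print(f"mean keypoint confidence: {sum(scores) / len(scores):.2f}")
```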

During the development of the dataset for the Russian language, the similarity of certain gestures of the Russian and Indian sign languages was revealed. Some of them are presented in Table 1.

5. THE BIO-SIMILAR NEURAL NETWORK APPROACH

The recognition of individual signs and SLs in general is challenging due to the need to perform fine tracking of various sign components that are made rapidly and are further complicated by the overlapping of hands, face, etc. To solve this problem, the approach based on bio-similar neural networks seems particularly promising [55].

The visual cortex in the human brain is responsible for processing visual information [56] and includes five zones, whose functioning can be roughly described as follows:

• V1 is the identification of lines, with the mechanism being functionally similar to the Gabor filter [57] (see the sketch below);

• V2 is the construction of the edges;

• V3 is the detection of movements;

• V4 is the identification of geometric forms;

• V5 is the detection of the direction and movement speed of the objects.

It should be noted that the numbers above do not reflect the actual order of signal processing, as the zones have both direct and reverse interconnections.
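To illustrate the V1 analogy mentioned in the list above, here is a small sketch of a Gabor filter bank applied to a frame: each kernel responds to lines of one orientation, and the per-pixel argmax gives a crude orientation map. The kernel parameters are illustrative assumptions, and SciPy is assumed to be available.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5, psi=0.0):
    """2D Gabor kernel: a Gaussian envelope times a cosine carrier.
    theta selects the orientation of the lines the filter responds to."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(np.float64)
    x_t = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_t / lambd + psi)
    return envelope * carrier

# A bank of four orientations as a crude analogue of V1 simple cells
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
frame = np.random.rand(64, 64)                     # stand-in for a video frame
responses = [convolve2d(frame, k, mode="same") for k in bank]
orientation_map = np.argmax(np.stack(responses), axis=0)  # dominant line direction per pixel
```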

The dorsal and ventral paths of signal distribution in the cortex deserve special attention. The former goes through the V1, V2 and V5 zones and is responsible for spatial judgments and assessments. The latter goes through V1, V2 and V4 and is considered to be related to the recognition of form, the comprehension of the object and the long-term memory [58].

Currently, we are testing the network proposed by A.V. Kugaevskikh on the video files of the RSL dataset. The architecture of the network (the model of its neurons) is organized similarly to the visual cortex of the human brain.

Table 1

Similar gestures in RSL and ISL

The table compares the start and end frames of each gesture as performed in ISL and in RSL (the gesture images are not reproduced here). The gesture pairs shown are: Book (Книга), Man (Мужчина), Internet (Интернет), Clean (Школа) and Child (Низкий).

CONCLUSION

In our paper, we have overviewed the available methods for the recognition of both individual signs and sign languages in general. The study has been performed for the Indian and Russian sign languages. We have also covered the structure of the individual sign recognition system, which involves five components: the hand shape (configuration), orientation, localization, movement and non-manual markers. We have also considered the available datasets for the two languages, particularly for the static signs of the RSL.

Finally, we have briefly described a novel approach towards the prompt and accurate recognition of various sign components, based on a bio-similar neural network.

Funding: The reported study was funded by RFBR and DST according to the research project No. 19-57-45006.

REFERENCES

1. Indian Sign Language Research and Training Centre (ISLRTC). History. Available at: http://www.islrtc.nic.in/history-0 (accessed 13.10.2020).

2. Dasgupta T., Shukla S., Kumar S., Diwakar S., Basu A. A multilingual multimedia Indian sign language dictionary tool. The 6th Workshop on Asian Language Resources (ALR 6): Proceedings of the Workshop, Hyderabad, India, 2008, pp. 57-64.

3. ISL dictionary launch. Indian Sign Language Research and Training Centre. Available at: http://www.islrtc.nic.in/isl-dictionary-launch (accessed 13.10.2020).

4. Tavari N.V., Deorankar A.V., Chatur P.N. Hand gesture recognition of Indian sign language to aid physically impaired people. International Journal of Engineering Research and Applications, 2014, Spec. iss. ICIAC, vol. 5, pp. 60-66.

5. Vasishta M., Woodward J., Santis S. de. An introduction to Indian sign language: (Focus on Delhi). New Delhi, India, All India Federation of the Deaf, 1980. 176 p.

6. Indian sign language dictionary. Available at: http://indiansignlanguage.org/dictionary/ (accessed 13.10.2020).

7. Korolkova O.O. Determining the scope of the "Complete Dictionary of Russian Sign Language". Sovremennye issledovaniya sotsial'nykh problem = Modern Studies of Social Issues, 2014, no. 3 (19), pp. 69-74. (In Russian).

8. Video dictionary of Russian sign language. Institute of Social Rehabilitation of NSTU: website. Novosibirsk, 2011. (In Russian). Available at: http://www.nisor.ru/snews/oa-/ (accessed 13.10.2020).

9. Geil'man I.F. Spetsificheskie sredstva obshcheniya glukhikh: daktilologiya i mimika. Ch. 1-4 [Specific means of deaf communication: dactylology and mimicry. Pt. 1-4]. Leningrad, 1975-1979.

10. Bazoev V.Z. et al. Slovar' russkogo zhestovogo yazyka [Dictionary of Russian sign language]. Moscow, Flinta Publ., 2009. 525 p.

11. Fradkina R.N. Govoryashchie ruki: tematicheskii slovar' zhestovogo yazyka glukhikh Rossii [Talking hands. Thematic dictionary of sign language for the deaf in Russia]. Moscow, MosgorVOG Publ., 2001. 598 p.

12. Korolkova O.O. Osobennosti omonimii i polisemii v russkom zhestovom yazyke (na materiale videoslovarya russkogo zhestovogo yazyka) [Features of homonymy and polysemy in Russian sign language (based on the video dictionary of Russian sign language)]. V mire nauchnykh otkrytii = In the World of Scientific Discoveries, 2013, no. 5-1 (41), pp. 169-184.

13. Korolkova O.O. Osobennosti zhestov russkogo zhestovogo yazyka, nazvaniyami kotorykh yavlyayutsya omonimy russkogo yazyka [Features of Russian sign language gestures whose names are homonyms in the Russian language]. V mire nauchnykh otkrytii = In the World of Scientific Discoveries, 2015, no. 7-8 (67), pp. 2931-2942.

14. Tripathi K., Baranwal N., Nandi G.C. Continuous dynamic Indian Sign Language gesture recognition with invariant backgrounds. 2015 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Kochi, India, 2015, pp. 2211-2216.

15. Rekha J., Bhattacharya J., Majumder S. Shape, texture and local movement hand gesture features for Indian sign language recognition. 3rd International Conference on Trendz in Information Sciences & Computing (TISC 2011), Chennai, India, 2011, pp. 30-35.

16. Lilha H., Shivmurthy D. Evaluation of features for automated transcription of dual-handed sign language alphabets. 2011 International Conference on Image Information Processing, Shimla, India, 2011, pp. 1-5.

17. Adithya V., Vinod P.R., Gopalakrishnan U. Artificial neural network based method for Indian sign language recognition. 2013 IEEE Conference on Information & Communication Technologies, Thuckalay, Tamil Nadu, India, 2013, pp. 1080-1085.

18. Dixit K., Jalal A.S. Automatic Indian sign language recognition system. 2013 3rd IEEE International Advance Computing Conference (IACC), Ghaziabad, India, 2013, pp. 883-887.

19. Sahoo A.K., Ravulakollu K.K. Vision based Indian sign language character recognition. Journal of Theoretical & Applied Information Technology, 2014, vol. 67, iss. 3.

20. Singh A., Arora S., Shukla P., Mittal A. Indian Sign Language gesture classification as single or double handed gestures. 2015 Third International Conference on Image Information Processing (ICIIP), Waknaghat, India, 2015, pp. 378-381.

21. Gangrade J., Bharti J., Mulye A. Recognition of Indian Sign Language using ORB with bag of visual words by Kinect Sensor. IETE Journal of Research, 2020, 15 March, pp. 1-5. DOI: 10.1080/03772063.2020.1739569.

22. Bhuyan M.K., Ghosh D., Bora P.K. Continuous hand gesture segmentation and co-articulation detection. Computer vision, graphics and image processing: 5th Indian conference, ICVGIP 2006, Madurai, India, December 13-16, 2006: proceedings. Berlin, New York, Springer, 2006, pp. 564-575.

23. Li H., Greenspan M. Segmentation and recognition of continuous gestures. 2007 IEEE International Conference on Image Processing, 2007, vol. 1, pp. I-365-I-368.

24. Bhuyan M.K., Bora P.K., Ghosh D. Trajectory guided recognition of hand gestures having only global motions. World Academy of Science, Engineering and Technology, 2008, vol. 2, no. 9, pp. 2012-2023.

25. Kishore P.V., Kumar P.R. Segment, track, extract, recognize and convert sign language videos to voice/text. International Journal of Advanced Computer Science and Applications, 2012, vol. 3, no. 6, pp. 35-47.

26. Nanivadekar P.A., Kulkarni V. Indian sign language recognition: database creation, hand tracking and segmentation. 2014 International Conference on Circuits, Systems, Communication and Information Technology Applications (CSCITA), Mumbai, India, 2014, pp. 358-363.

27. Kishore P.V., Prasad M.V., Prasad C.R., Rahul R. 4-Camera model for sign language recognition using elliptical fourier descriptors and ANN. 2015 International Conference on Signal Processing and Communication Engineering Systems, Guntur, India, 2015, pp. 34-38.

28. Athira P.K., Sruthi C.J., Lijiya A. A signer independent sign language recognition with co-articulation elimination from live videos: an Indian scenario. Journal of King Saud University - Computer and Information Sciences, 2019. DOI: 10.1016/j.jksuci.2019.05.002.

29. Prasad M.V., Kishore P.V., Kumar E.K., Kumar D.A. Indian sign language recognition system using new fusion based edge operator. Journal of Theoretical & Applied Information Technology, 2016, vol. 88 (3), pp. 574-584.

30. Bhuyan M.K., Ghosh D., Bora P.K. A frame work of hand gesture recognition with applications to sign language. 2006Annual IEEE India Conference, New Delhi, India, 2006, pp. 1-6.

31. Agrawal S.C., Jalal A.S., Bhatnagar C. Recognition of Indian Sign Language using feature fusion. 2012 4th International Conference on Intelligent Human Computer Interaction (IHCI), Kharagpur, India, 2012, pp. 1-5.

32. Joshi G., Vig R., Singh S. Analysis of Zernike moment-based features for sign language recognition. Intelligent Communication, Control and Devices. Singapore, Springer, 2018, pp. 1335-1343.

33. Kumar D.A., Sastry A.S., Kishore P.V., Kumar E.K., Kumar M.T. S3DRGF: spatial 3-D relational geometric features for 3-D sign language representation and recognition. IEEE Signal Processing Letters, 2019, vol. 26 (1), pp. 169-173.

34. Kaur B., Joshi G., Vig R. Identification of ISL alphabets using discrete orthogonal moments. Wireless Personal Communications, 2017, vol. 95 (4), pp. 4823-4845.

35. Raheja J.L., Mishra A., Chaudhary A. Indian Sign Language recognition using SVM. Pattern Recognition and Image Analysis, 2016, vol. 26 (2), pp. 434-441.

36. Kumar P., Gauba H., Roy P.P., Dogra D.P. A multimodal framework for sensor based sign language recognition. Neurocomputing, 2017, vol. 259, pp. 21-38.

37. Joshi G., Vig R., Singh S. DCA-based unimodal feature-level fusion of orthogonal moments for Indian sign language dataset. IET Computer Vision, 2018, vol. 12 (5), pp. 570-577.

38. Raghuveera T., Deepthi R., Mangalashri R., Akshaya R. A depth-based Indian Sign Language recognition using Microsoft Kinect. Sadhana, 2020, vol. 45, no. 1, p. 34.

39. Grif M.G., Prihodko A.L. Approach to the Sign language gesture recognition framework based on HamNoSys analysis. Actual Problems of Electronic Instrument Engineering (APEIE-2018): proceedings, Novosibirsk, 2018, vol. 1, pt. 4, pp. 426-429. DOI: 10.1109/APEIE.2018.8545086.

40. Grif M.G., Lukoyanychev A.V. Gesture localization in the test mode in the integral system of sign language training. Journal of Physics: Conference Series, 2019, vol. 1333, p. 032023.

41. Borstell C. Differential object marking in sign languages. Glossa: a Journal of General Linguistics, 2019, vol. 4 (1).

42. Polinsky M. Sign languages in the context of heritage language: a new direction in language research. Sign Language Studies, 2018, vol. 18 (3), pp. 412-428.

43. Ryumin D., Karpov A.A. Towards automatic recognition of sign language gestures using Kinect 2.0. International Conference on Universal Access in Human-Computer Interaction. Cham, Springer, 2017, pp. 89-101.

44. Gruber I., Ryumin D., Hruz M., Karpov A. Sign language numeral gestures recognition using convolutional neural network. Interactive Collaborative Robotics. Cham, Springer, 2018, pp. 70-77.

45. Rozaliev V.L. Avtomatizatsiya raspoznavaniya kistei ruk cheloveka s pomoshch'yu Kinect dlya perevoda zhestovogo yazyka [Automated recognition of human hands using Kinect for sign language translation]. Izvestiya Volgogradskogo gosudarstvennogo tekhnicheskogo universiteta = Izvestia of Volgograd State Technical University, 2015, no. 6 (163), pp. 74-78.

46. Dorofeev N.S., Rozaliev V.L., Orlova Yu.A., Soloshenko A.N. Raspoznavaniya daktil'nykh zhestov russkogo yazyka glukhikh [Recognition of fingerspelling gestures of the Russian sign language of the deaf]. Izvestiya Volgogradskogo gosudarstvennogo tekhnicheskogo universiteta = Izvestia of Volgograd State Technical University, 2013, no. 14 (117), pp. 42-45.

47. Konstantinov V.M., Orlova Yu.A., Rozaliev V.L. Razrabotka 3D-modeli tela cheloveka s ispol'zovaniem MS Kinect [Development of a 3D model of the human body using MS Kinect]. Izvestiya Volgogradskogo gosudarstvennogo tekhnicheskogo universiteta = Izvestia of Volgograd State Technical University, 2015, no. 6 (163), pp. 65-69.

48. Klimov A.S., Rozaliev V.L., Orlova Yu.A. Avtomatizatsiya postroeniya ob"emnoi modeli golovy cheloveka [Automation of the construction of a three-dimensional model of the human head]. Izvestiya Volgogradskogo gosudarstvennogo tekhnicheskogo universiteta = Izvestia of Volgograd State Technical University, 2014, no. 25(152), pp. 67-71.

49. Fan N.Kh., Spitsyn V.G. Raspoznavanie formy ruki na videoposledovatel'nosti v rezhime real'nogo vremeni na osnove Surf-deskriptorov i neironnoi seti [Hand shape recognition on real-time video sequences based on Surf descriptors and a neural network]. Elektromagnitnye volny i elektronnye sistemy = Electromagnetic Waves and Electronic Systems, 2012, vol. 17, no. 7, pp. 31-39.

50. IIITA-ROBITA Indian Sign Language Gesture Database. Available at: https://robita.iiita.ac.in/dataset.php (accessed 14.10.2020).

51. Ansari Z.A., Harit G. Nearest neighbour classification of Indian sign language gestures using Kinect camera. Sadhana, 2016, vol. 41 (2), pp. 161-182.

52. Singha J., Das K. Recognition of Indian sign language in live video. arXiv preprint, arXiv:1306.1301, 2013.

53. Elakkiya R., Vanitha V. Interactive real time fuzzy class level gesture similarity measure based sign language recognition using artificial neural networks. Journal of Intelligent & Fuzzy Systems, 2019, vol. 37, no. 5, pp. 6855-6864.

54. Elakkiya R., Selvamani K. Enhanced dynamic programming approach for subunit modelling to handle segmentation and recognition ambiguities in sign language. Journal of Parallel and Distributed Computing, 2018, vol. 117, pp. 246-255.

55. Kugaevskikh A.V., Sogreshilin A.A. Analyzing the efficiency of segment boundary detection using neural networks. Optoelectronics, Instrumentation and Data Processing, 2019, vol. 55, no. 4, pp. 414-422. DOI: 10.3103/S8756699019040137.

56. Visual cortex. WikipediA. Available at: https://en.wikipedia.org/wiki/Visual_cortex (accessed 14.10.2020).

57. Jones J.P., Palmer L.A. An evaluation of the two-dimensional Gabor filter model of simple receptive fields in cat striate cortex. Journal of Neurophysiology, 1987, vol. 58(6), pp. 1233-1258.

58. Two-streams hypothesis. WikipediA. Available at: https://en.wikipedia.org/wiki/Two-streams_hypothesis (accessed 14.10.2020).

Elakkiya R., AP-III, School of Computing, CSE, SASTRA Deemed University, India. Research interests include methods for recognizing the sign languages of the deaf. Author of more than 30 scientific and educational works. E-mail: elakkiya@cse.sastra.edu

Grif Mikhail G., professor at the Department of Automated Control Systems, Faculty of Automation and Computer Engineering, Novosibirsk State Technical University. Research interests: the design of complex systems and computer sign language translation systems. Author of more than 300 scientific and educational works. E-mail: grif@corp.nstu.ru

Prikhodko Alexey L., junior researcher at the Department of Automated Control Systems, Faculty of Automation and Computer Engineering, Novosibirsk State Technical University. His research interests include methods for recognizing sign languages of the deaf. He has published 15 scientific papers. E-mail: alexeyayay@yandex.ru

Bakaev Maxim А., associate professor at the Department of Automated Control Systems, Faculty of Automation and Computer Engineering, Novosibirsk State Technical University. His research interests include human-computer interaction, interface design, machine learning. He has over 100 publications. E-mail: bakaev@corp.nstu.ru


For citation:

Elakkiya R., Grif M.G., Prikhodko A.L., Bakaev M.A. Recognition of Russian and Indian sign languages used by the deaf people. Nauchnyi vestnik Novosibirskogo gosudarstvennogo tekhnich-eskogo universiteta = Science bulletin of the Novosibirsk state technical university, 2020, no. 2-3 (79), pp. 57-76. DOI: 10.17212/1814-1196-2020-2-3-57-76.
