
IMAGE PROCESSING, PATTERN RECOGNITION

Retinal biometric identification using convolutional neural network

Rodiah 1, Sarifuddin Madenda 2, Diana Tri Susetianingtias 1, Fitrianingsih 1, Dea Adlina 1, Rini Arianty 1

1 Department of Informatics, Gunadarma University, Margonda Raya Street Number 100, Pondok Cina, Depok, West Java, 16431, Indonesia;

2 Doctoral Program in Information Technology, Gunadarma University, Margonda Raya Street Number 100, Pondok Cina, Depok, West Java, 16431, Indonesia

Abstract

Authentication is needed to strengthen a system and protect it from its vulnerabilities and weaknesses. Traditional authentication methods such as PINs or passwords still have many weaknesses, such as being vulnerable to hacking. Newer methods, such as biometric systems, are used to deal with this problem. Retinal biometric characteristics are unique and difficult to manipulate compared to other biometric characteristics such as the iris or fingerprints, because the retina is located at the back of the human eye and is therefore out of reach of normal human vision. This study uses segmented blood vessels of retinal fundus images as its features. The dataset used is the DRIVE dataset. A preprocessing stage extracts these features to produce retinal blood vessel segmentation images. The segmented images then undergo two-dimensional transformations such as rotation, enlargement, shifting, cropping, and flipping to increase the number of retinal blood vessel segmentation samples. The transformations yielded 189 images, split at a ratio of 80 % (151 images) for training data and 20 % (38 images) for validation data. The model is built with the Convolutional Neural Network method. Training the model over 10 iterations produces a model accuracy of 98 %. The resulting model is used to identify individual retinas in the retinal biometric system.

Keywords: blood vessels, convolutional neural network, identification, retina, segmentation.

Citation: Rodiah, Madenda S, Susetianingtias DT, Fitrianingsih, Adlina D, Arianty R. Retinal biometric identification using convolutional neural network. Computer Optics 2021; 45(6): 865-872. DOI: 10.18287/2412-6179-CO-890.

Acknowledgments: The work was partially funded by DP2M RistekDikti and Gunadarma University; special thanks go to the Gunadarma University Research Bureau for the opportunity to conduct research in the field of biometrics.

Introduction

Currently, individual identification is of great importance, since many systems require legitimate users for access control, especially systems that store valuable documents [1] and important data. One identification technology presently under development is biometric feature-based identification [2, 3]. A biometric identification system performs identification and recognition using a biometric characteristic pattern [4, 5] that a person owns [6]. The most widely developed biometric identification technique today is the fingerprint, because a fingerprint contains about 40 unique characteristics [7], which enables the identification of about 1.1 trillion different individuals [8]. Apart from fingerprints, another body part that can serve as a biometric identifier is the retina. The retina is a sensitive eye organ whose function is vision. Aside from being used to see, the retina can be used for identification because it has unique characteristics: the retinal tissue of the human eye contains about 256 unique characteristics [9].

Research [10] implemented a neural network to identify individuals based on retinal biometrics. That study used the backpropagation algorithm with the three main layer types, namely input, hidden, and output layers [11, 12]. The retinal images were identified with a feed-forward neural network consisting of input, hidden, and output layers, with the sigmoid activation function applied in the hidden and output layers. The study had a total of 233 retinal segmentation images with a resolution of 768 × 584 pixels, drawn from 139 individual retinal samples; 40 of these images were set aside as test data. To obtain the best accuracy, the number of neurons in the hidden layer was varied from 8 to 35. The highest accuracy, 97.5 %, was reached with 35 hidden neurons. The network was trained with a maximum of 10,000 epochs in each test.

Research [13] identified biometric characteristics using deep learning. The deep learning system proposed in that study, MultiTraitConvNet, combines a Convolutional Neural Network (CNN) with a Softmax classifier to extract discriminative features from images of five biometric trait classes, including face, fingerprint, footprint, and iris. Each class has 100 images, of which 80 are training data and 20 are test data. Training is carried out with the backpropagation algorithm and the Adam optimization method. The model ends with two fully connected layers for classification: the output of the first fully connected layer is fed into the softmax classifier (the last fully connected layer), which produces a probability distribution over the N class labels. The architecture takes image inputs resized to 64 × 64 pixels. The proposed model contains four convolution layers that produce 32, 64, 128, and 256 feature maps, respectively. Each convolution layer uses a ReLU activation function to decide which neurons are activated.

Research [14] identified the retinas of individuals with Diabetic Retinopathy (DR), classified into 5 classes, namely No DR (0), Mild DR (1), Moderate DR (2), Severe DR (3), and Proliferative DR (4). The study used the DR dataset provided by EyePACS, which contains 88,702 left- and right-eye images from 44,351 patients. Only 35,126 of these images were taken and divided into 80 % training data and 20 % test data, using 3 CNN models, each with the ReLU activation function. The study addresses overfitting in the pooling layers by using max pooling to reduce image dimensions [15, 16] and accelerate the learning process, after which the images are classified into the 5 DR types.

Research [17] performed retinal blood vessel segmentation with a CNN-based deep learning algorithm, evaluating the results in two aspects, namely the accuracy and the sensitivity of the segmented retinal blood vessel images. The implementation uses the Caffe deep learning library. The dataset comes from the ARIA dataset, which contains 143 retinal fundus images with an initial resolution of 768 × 576. Each image is rotated and flipped at every angle, so that during training the amount and variety of trained data grow considerably. The image resolution was first reduced to 565 × 584, and training ran for 10,000 iterations. The purpose of the study is to compare accuracy and sensitivity on two existing dataset sources, the STARE and DRIVE datasets, both of which were prepared by experts from various countries specifically for the retinal segmentation task.

In this research, biometric identification by retinal fundus image features is carried out using the segmented features of the retinal fundus image, specifically the blood vessel pattern. The extracted features are then used in training and testing with a convolutional neural network algorithm. The accuracy of the retinal identification process is also calculated to assess the performance of the algorithm. The results are expected to allow the retinal biometric system to identify the retina rapidly and with high accuracy.

1. Methods

This research consists of several stages. First comes the retinal fundus image collection stage and the preprocessing stage, in which the RGB image is converted to a green channel image, the histogram is leveled with the adaptive histogram equalization method, and a filtering process removes non-blood-vessel objects from the retinal fundus image. Next, in the feature extraction phase, the retinal fundus image is segmented and then rescaled to change its output dimensions. The following step builds a model with a convolutional neural network algorithm to identify the biometric features of the fundus image. Building the neural network model consists of two main processes: the first is the training process, which trains the constructed convolutional neural network layers to recognize the biometric features of fundus images.

The stage that follows the training process is the testing process, which aims to test the trained model. The retinal fundus image dataset is divided into 9 classes, each containing 21 fundus images with different fields of view at angles of −45, −40, −35, −30, −25, −20, −15, −10, −5, −2, 0, 2, 5, 10, 15, 20, 25, 30, 35, 40, and 45 degrees. The training and testing process uses an 8:2 split of the fundus image dataset. The final stage of retinal fundus identification is the processing of identification results, in which the accuracy of the identification process is calculated. Examples of the retinal fundus images used can be seen in fig. 1.
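For illustration, the following is a minimal sketch of the 8:2 split described above, assuming the 189 segmented images and their class labels are already loaded into arrays; the file names and the use of scikit-learn are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# 189 segmented fundus images (9 classes x 21 fields of view) and their labels.
images = np.load("segmented_vessels.npy")   # hypothetical file, shape (189, H, W)
labels = np.load("class_labels.npy")        # hypothetical file, shape (189,)

# 80 % training / 20 % validation, stratified so each of the 9 classes
# keeps roughly the same proportion in both subsets.
x_train, x_val, y_train, y_val = train_test_split(
    images, labels, test_size=0.2, stratify=labels, random_state=0)

print(len(x_train), len(x_val))  # 151 and 38 images, matching the paper
```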

1.1. Fundus retina image preprocessing

The initial stage converts the RGB image into a green channel image. The green channel is chosen for image processing because its intensity values are neither too bright nor too dark [19]. The green channel extract then goes through an incomplement process [20], which inverts the pixel intensity values of the image. The incomplement process follows formula (1), where i is the resulting intensity, x is the image intensity at a given pixel position, and the constant 255 is the highest intensity value in an image:

i = 255 − x. (1)

After the incomplement image is obtained, histogram leveling is carried out with adaptive histogram equalization to evenly increase the image intensity. The stage following histogram equalization in preprocessing is the filtering process, which aims to eliminate non-blood-vessel objects in the retinal fundus image, including the optic disk and noise. Filtering begins with the formation of the foreground, or upper layer, which is used to eliminate non-blood-vessel objects by separating the foreground from the background of the image. The foreground layer is formed by a morphological opening between the histogram-equalized image and a ball-shaped structuring element of size 8 × 8.

Fig. 1. Retinal fundus image [18]

The foreground and background of the fundus retinal image are separated by subtracting the foreground image from the histogram-equalized image. The result of the filtering process with a median filter is then brought to edge sharpening with the imsharpen function, using a sharpening radius of 25 pixels and a sharpening amount of 2. The stages of preprocessing can be seen in fig. 2.
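The preprocessing chain above can be sketched with OpenCV as follows. This is a best-effort reconstruction under stated assumptions: the paper appears to use MATLAB-style functions such as imsharpen, for which OpenCV equivalents are substituted, the "ball" structuring element is approximated by an elliptical kernel, and the CLAHE clip limit and median kernel size are assumptions.

```python
import cv2
import numpy as np

def preprocess_fundus(bgr):
    # 1. Take the green channel (best vessel contrast) and invert it
    #    with formula (1): i = 255 - x (the "incomplement" step).
    green = bgr[:, :, 1]
    inv = 255 - green

    # 2. Adaptive histogram equalization (CLAHE) to even out intensity;
    #    clipLimit is an assumption, the paper does not state it.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    eq = clahe.apply(inv)

    # 3. Morphological opening with an 8x8 structuring element builds the
    #    foreground layer; an elliptical kernel stands in for the "ball".
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (8, 8))
    foreground = cv2.morphologyEx(eq, cv2.MORPH_OPEN, kernel)

    # 4. Subtract the foreground from the equalized image to suppress
    #    non-vessel objects (optic disk, background), then median-filter
    #    (kernel size assumed).
    vessels = cv2.subtract(eq, foreground)
    vessels = cv2.medianBlur(vessels, 3)

    # 5. Unsharp masking in place of MATLAB's imsharpen, with the paper's
    #    parameters: radius 25, amount 2.
    blurred = cv2.GaussianBlur(vessels, (0, 0), sigmaX=25)
    sharp = cv2.addWeighted(vessels, 1 + 2.0, blurred, -2.0, 0)
    return sharp
```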

1.2. Feature extraction

The feature extraction process obtains the characteristic biometric features of retinal fundus images to be used in the fundus image identification process by the built neural network architecture. The characteristic feature used is the blood vessel segmentation of the retinal fundus image, obtained with a thresholding process. The thresholding results form a binary image whose pixels have values of 1 and 0. An area opening process is then carried out to eliminate any remaining objects smaller than 100 pixels; its purpose is to keep only the blood vessel segmentation, without non-blood-vessel objects. The resulting blood vessel segmentation of the fundus image can be seen in fig. 3.

Fig. 2. Preprocessing steps: green channel image, histogram equalization, filtered image, blood vessel extraction image

Fig. 3. Results of retinal blood vessel extraction (binarized image without denoising)

Fig. 3 shows the binary image needed for the blood vessel segmentation process. The white pixels are the blood vessels, not the background; therefore the image cannot be inverted, as the object (the blood vessels) must remain the foreground.
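A sketch of the thresholding and area opening steps, assuming the preprocessed vessel image from the previous stage; scikit-image's remove_small_objects stands in for the area opening (the 100-pixel minimum follows the text, while the choice of Otsu's threshold is an assumption).

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects

def segment_vessels(vessel_img):
    # Threshold to a binary image (1 = vessel, 0 = background).
    # Otsu's method is an assumption; the paper does not name the rule.
    t = threshold_otsu(vessel_img)
    binary = vessel_img > t

    # Area opening: drop connected components smaller than 100 pixels,
    # keeping only blood-vessel structures in the foreground.
    binary = remove_small_objects(binary, min_size=100)

    # Vessels stay white (foreground); the image is not inverted, since
    # the object of interest must remain the foreground.
    return binary.astype(np.uint8)
```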

1.3. Convolutional neural network models design

The design of the deep convolutional neural network model is used to identify biometric features in retinal fundus images. The design uses two main libraries: Keras as the high-level neural network API and TensorFlow as the backend engine. The structure of the neural network model and the value of each hyperparameter are the result of trials over candidate values and models, choosing the configuration with the highest accuracy and the lowest error rate. The phases of model formation for retinal biometric identification with convolutional neural networks are as follows.

1. Define the model in Keras. This research uses a sequential model. Layer modules are used in the neural network design, and the optimizer module is used to minimize the error rate and maximize the accuracy of the neural network.

2. Conduct a rescaling process to change the size of the blood vessel segmentation images. Rescaling is done to reduce the load and computation when training the neural network architecture. The rescaled images are stored in a multi-dimensional matrix of size (189, 256, 256), where 189 is the total number of images in the dataset and 256 × 256 pixels is the image size.

3. Label the dataset to represent the images as categorical variables that the program can read during training. Each image is labeled with a one-hot encoding technique, in which the bit for the image's class has the value 1 while all other bits are 0. Each bit set to 1 represents one class of the retinal fundus dataset. This study has 9 classes, hence 9 one-hot categories; for example, as shown in Tab. 1, class 1 has the one-hot label 100000000.

Tab. 1. One-hot label for each class

Class One-hot

Class 1 100000000

Class 2 010000000

Class 3 001000000

Class 4 000100000

Class 5 000010000

Class 6 000001000

Class 7 000000100

Class 8 000000010

Class 9 000000001

One-hot encoding is a method used to represent categorical variables. The target is 9 classes, that is, 9 people, each with a different binary code pattern. One-hot label encoding is used because the CNN cannot directly categorize a person into the 9 target classes; the categorical data must first be converted into numbers the CNN model can recognize. For example, class 1 with the one-hot label 100000000 is recognized as the retina of target person 1, class 2 with the one-hot label 010000000 as the retina of target person 2, and so on up to class 9; the position of the 1 bit shifts according to the target class to be recognized.
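A minimal sketch of this labeling step with Keras, assuming integer class indices 0–8 stand for the nine individuals:

```python
from tensorflow.keras.utils import to_categorical

# Integer labels 0..8 stand for person 1..person 9.
y = [0, 1, 2, 8]                    # example labels
y_onehot = to_categorical(y, num_classes=9)
print(y_onehot[0])                  # [1. 0. 0. 0. 0. 0. 0. 0. 0.] -> class 1
```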

1.4. Convolutional neural network architecture

The convolution layers of the architecture model in this study are arranged with the input layer first, taking an input image of 224 × 224 pixels. The parameter count of each convolution layer is the number of features the neural network model can learn when performing identification. It is calculated as follows:

Parameter = (h × w × c + 1) × f. (2)

In formula (2), the parameter count of a convolution layer is determined by the kernel size (h × w), the number of input channels (c), and the number of filters (f). The same formula is applied to every subsequent convolution layer. For example, the first convolutional layer of the proposed network (see tab. 2) uses a 3 × 3 kernel with 3 input channels and 256 filters; its parameter count is:

Parameter = (3 × 3 × 3 + 1) × 256 = 7168.

The parameter calculations for every convolutional layer of the proposed architecture are listed in tab. 2.

Tab. 2. Calculation result of convolution layer parameter

Layer The calculation Parameter value

1 (3 x 3 x 3 +1) x 256 7168

2 (3 x 3 x 256 +1) x 256 590080

3 (3 x 3 x 256 +1) x 128 295040

4 (3 x 3 x 128 +1) x 64 73792

5 (3 x 3 x 64 +1) x 32 18464
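Formula (2) and the entries of tab. 2 can be verified directly with a short function (the names are illustrative):

```python
def conv_params(h, w, c, f):
    # (kernel height x width x input channels + 1 bias) per filter, formula (2).
    return (h * w * c + 1) * f

# Layers 1-5 of Tab. 2: input channels 3, 256, 256, 128, 64
# feeding 256, 256, 128, 64, 32 filters.
for c, f in [(3, 256), (256, 256), (256, 128), (128, 64), (64, 32)]:
    print(conv_params(3, 3, c, f))  # 7168, 590080, 295040, 73792, 18464
```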

The weight output of each convolution process is computed from the input image size (W), the kernel size (F), the padding (P), and the stride (S) hyperparameters. The weight output of a neural network layer is:

Weight = (W − F + 2P)/S + 1. (3)

For example, the weight value of the first convolutional layer, with an initial image input of 224 × 224, a 3 × 3 kernel, padding 0, and stride 1, follows from formula (3): Weight = (224 − 3 + 2·0)/1 + 1 = 222. The weight value of the following pooling layer uses the output of the previous convolution as input, with a 2 × 2 kernel, padding 0, and stride 2: Weight = (222 − 2 + 2·0)/2 + 1 = 111.

The architecture does not apply padding in any layer (P = 0 throughout). Tab. 3 lists the calculated weight values of every convolution and pooling layer in the created neural network architecture.

Tab. 3. Weight output

Layer Layer type Input Kernel filter Stride Weight value

1 Convolution 224 3 1 222

1 Maxpooling 222 2 2 111

2 Convolution 111 3 1 109

2 Maxpooling 109 2 2 54

3 Convolution 54 3 1 52

3 Maxpooling 52 2 2 26

4 Convolution 26 3 1 24

5 Convolution 24 3 1 22
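Formula (3) and the values in tab. 3 can likewise be verified with a few lines; integer division models the floor taken by the pooling layers:

```python
def out_size(W, F, P=0, S=1):
    # Formula (3): output width of a convolution or pooling layer.
    return (W - F + 2 * P) // S + 1

w = 224                                  # input width; no padding anywhere
for layer, (F, S) in [("conv1", (3, 1)), ("pool1", (2, 2)),
                      ("conv2", (3, 1)), ("pool2", (2, 2)),
                      ("conv3", (3, 1)), ("pool3", (2, 2)),
                      ("conv4", (3, 1)), ("conv5", (3, 1))]:
    w = out_size(w, F, S=S)
    print(layer, w)                      # 222, 111, 109, 54, 52, 26, 24, 22
```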

The first convolution layer has 256 filters with a 3 × 3 pixel kernel and 3 input channels; it takes an image input of 224 × 224 pixels. The ReLU activation function is used throughout the convolution layers. The next layer is batch normalization, which normalizes the data from the first convolutional process. After that comes a pooling layer of the max pooling type with a 2 × 2 filter kernel, whose purpose is to reduce the pixel dimensions of the image. The following layer is a dropout layer with a rate of 0.5, which reduces overfitting from the previous layers. The convolutional neural network architecture is given in detail in tab. 4. As tab. 4 shows, the processing flow of each stage is much the same as that of the first convolutional layer: a convolutional layer, then a batch normalization layer, then a max pooling layer, and lastly a dropout layer. The only difference is the filter count of each convolution layer: from the first to the fifth convolution layer the filter counts are 256, 256, 128, 64, and 32, respectively. The model is sequential, so each convolutional layer takes as input the weights output by the previous convolutional layer. Next comes the flatten process, in which all the feature maps obtained are concatenated into a single fully connected layer. The next layer is a dense layer, a regular layer that bridges the flattened feature maps with the output of the previous layer; it is fully connected with 32 units (nodes), uses the ReLU activation function, and is followed by batch normalization and dropout as in the previous layers. Finally, the fully connected feature maps are categorized into 9 classes, as determined at the outset, using the softmax activation function.

2. Result and discussion

This study trained the model for 10 epochs with a batch size of 15, using the dataset split at a ratio of 80 % (151 images) for training and 20 % (38 images) for validation. Trials were conducted with several epoch values; the results reported here use 10 epochs, which gave the highest accuracy.

2.1. Blood vessel segmentation result

Examples of the segmented biometric features of retinal fundus images, in the form of blood vessel segmentations of the fundus images, can be seen in tab. 5.

Tab. 4. Proposed convolutional neural network architecture

Layer (Type) Input Output

Conv2d_1_ Input Layer (None, 224, 224, 3) (None, 224, 224, 3)

Conv2D 1: Conv2D (None, 224, 224, 3) (None, 222, 222, 256)

Batch_ normalization_1 (None, 222, 222, 256) (None, 222, 222, 256)

Max_pooling2D_1 (None, 222, 222, 256) (None, 111, 111, 256)

Dropout_1 (None, 111, 111, 256) (None, 111, 111, 256)

Conv2D 2: Conv2D (None, 111, 111, 256) (None, 109, 109, 256)

Batch_ normalization 2 (None, 109, 109, 256) (None, 109, 109, 256)

Max_pooling2D_2 (None, 109, 109, 256) (None, 54, 54, 256)

Dropout_2 (None, 54, 54, 256) (None, 54, 54, 256)

Conv2D 3: Conv2D (None, 54, 54, 256) (None, 52, 52, 128)

Batch_ Normalization 3 (None, 52, 52, 128) (None, 52, 52, 128)

Max_pooling2D_3 (None, 52, 52, 128) (None, 26, 26, 128)

Dropout_3 (None, 26, 26, 128) (None, 26, 26, 128)

Conv2D 4: Conv2D (None, 26, 26, 128) (None, 24, 24, 64)

Batch_ Normalization 4 (None, 24, 24, 64) (None, 24, 24, 64)

Dropout_4 (None, 24, 24, 64) (None, 24, 24, 64)

Conv2D 5: Conv2D (None, 24, 24, 64) (None, 22, 22, 32)

Batch_ Normalization 5 (None, 22, 22, 32) (None, 22, 22, 32)

Dropout_5 (None, 22, 22, 32) (None, 22, 22, 32)

Flatten_1 (None, 22, 22, 32) (None, 15488)

Dense_1 (None, 15488) (None, 32)

Batch_ Normalization_5 (None, 32) (None, 32)

Dropout_6 (None, 32) (None, 32)

Dense_2 (None, 32) (None, 10)
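The architecture in tab. 4 can be written as a Keras Sequential model. The sketch below reproduces the layer shapes in the table exactly; all dropout rates are set to 0.5, which the text states only for the first dropout layer, so the remaining rates are assumptions. Note that tab. 4 lists the final dense output as 10 units, which reproduces the reported total of 1,483,594 parameters even though the text describes 9 classes; the sketch follows the table.

```python
from tensorflow.keras import Input, Sequential
from tensorflow.keras.layers import (BatchNormalization, Conv2D, Dense,
                                     Dropout, Flatten, MaxPooling2D)

model = Sequential([
    Input(shape=(224, 224, 3)),                 # input layer of Tab. 4
    Conv2D(256, (3, 3), activation="relu"),     # -> (222, 222, 256)
    BatchNormalization(),
    MaxPooling2D((2, 2)),                       # -> (111, 111, 256)
    Dropout(0.5),                               # rate stated in the text
    Conv2D(256, (3, 3), activation="relu"),     # -> (109, 109, 256)
    BatchNormalization(),
    MaxPooling2D((2, 2)),                       # -> (54, 54, 256)
    Dropout(0.5),                               # rates below are assumptions
    Conv2D(128, (3, 3), activation="relu"),     # -> (52, 52, 128)
    BatchNormalization(),
    MaxPooling2D((2, 2)),                       # -> (26, 26, 128)
    Dropout(0.5),
    Conv2D(64, (3, 3), activation="relu"),      # -> (24, 24, 64)
    BatchNormalization(),
    Dropout(0.5),
    Conv2D(32, (3, 3), activation="relu"),      # -> (22, 22, 32)
    BatchNormalization(),
    Dropout(0.5),
    Flatten(),                                  # -> 22 * 22 * 32 = 15488
    Dense(32, activation="relu"),
    BatchNormalization(),
    Dropout(0.5),
    Dense(10, activation="softmax"),            # 10 units per Tab. 4
])
model.summary()  # Total params: 1,483,594, matching the conclusion
```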

2.2. Convolutional neural network model training result

The formed model architecture is then compiled so that it can be used in the training process. The loss function measures how well the CNN model performs in identifying retinal fundus images. Details of the performance during the training process can be seen in tab. 6.
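A hedged sketch of the compile-and-train step, matching the 10 epochs and batch size 15 reported in section 2; the optimizer and the exact loss are assumptions, since the paper names neither, though categorical cross-entropy is the natural choice for one-hot labels.

```python
# Assumes `model` from the architecture sketch above and the one-hot-encoded
# training/validation arrays from the earlier sketches.
model.compile(optimizer="adam",                 # assumption: optimizer not stated
              loss="categorical_crossentropy",  # loss over one-hot labels
              metrics=["accuracy"])

history = model.fit(x_train, y_train,
                    epochs=10, batch_size=15,
                    validation_data=(x_val, y_val))
```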

Based on the results of the model training in tab. 6, it can be seen that at the beginning of the training process the validation loss increased significantly and the validation accuracy showed no significant improvement either. From the 4th epoch onward, however, the training process began to improve, until at the 9th epoch the validation accuracy reached its best value, namely 1. A slight decrease occurred in the last epoch. The train and validation results were computed on the training data and validation data used during the training.


Based on the results in tab. 6, it can be concluded that the training performance of the model is very good, with high accuracy and low loss; a good loss function is one that yields the lowest expected error. The training results also show that the validation set reaches an accuracy about 0.313 % higher than the training set. This indicates that the model did not overfit during training and was thus able to produce higher accuracy on the validation set than on the training set, as can be seen in fig. 4.

The accuracy curves in fig. 4 show that the training accuracy improves steadily in each iteration, while the validation accuracy also increases, although less smoothly. The graphs also show that the final validation accuracy is higher than the training accuracy, as can be seen together with the loss curves in fig. 5.

The loss curves in fig. 5 show that the training loss behaves very well, decreasing steadily in each iteration. The validation loss, meanwhile, tends to fluctuate, with a significant increase in the 1st and 2nd iterations; after that it gradually decreases until it becomes almost equal to, and at times even exceeds, the training loss.
Tab. 5. Retinal blood vessel segmentation results: retinal fundus images and their blood vessel segmentations, labeled by individual target class (class 1 through class 9)

Tab. 6. Neural network loss and accuracy values per epoch

Epoch Val accuracy Val loss Train accuracy Train loss

1 0.1569 3.3044 0.2108 2.3108

2 0.2097 8.8454 0.5185 1.3215

3 0.2189 10.4002 0.7258 0.7878

4 0.5056 1.7704 0.8332 0.5091

5 0.5789 1.7350 0.8888 0.3493

6 0.6617 0.5951 0.9261 0.2463

7 0.8733 0.4007 0.9369 0.2039

8 0.8722 0.4876 0.9463 0.1701

9 1.0000 7.2601 0.9525 0.1502

10 0.9806 0.1926 0.9583 0.1379

Fig. 4. Accuracy curves: training accuracy and validation accuracy vs. epochs

Fig. 5. Loss curves: training loss and validation loss vs. epochs

Conclusion

The CNN model design was successfully created with 5 convolution layers, 3 max pooling layers, 6 batch normalization layers, 6 dropout layers, 1 flatten layer, and 2 dense layers. The model input is an image of 224 × 224 pixels, and the total number of parameters is 1,483,594 variables. The training and validation process uses the CNN algorithm with an 8:2 ratio: of the 189 images in total, 80 % (151 images) serve as training data and 20 % (38 images) as validation data. Model training produced a test accuracy of 98 %, and the trained model was used for the individual identification trial. The identification trial consisted of 10 randomized trials and produced 9 correct identifications.

Further development can refine the retinal biometric identification process, and using a GPU can speed up the training process.

References

[1] Addy D, Bala P. Physical access control based on biometrics and GSM. Int Conf on Advances in Computing, Communications and Informatics (ICACCI) 2016: 19952001. DOI: 10.1109/ICACCI.2016.7732344.

[2] Okokpujie K, Noma-Osaghae E, Okesola O, John SN, Okonigene RE. Design and implementation of a student attendance system using Iris biometric recognition. Int Conf on Computational Science and Computational Intelligence 2017: 563-567. DOI: 10.1109/CSCI.2017.96.

[3] Kalyani CH. Various biometric authentiocation techniques: a review. J Biom Biostat 2017; 8(5). DOI: 10.4172/21556180.1000371.

[4] Okokpujie K, Uduehi O, Edeko F. An enhanced biometric atm with gsm feedback mechanism. J Electr Electron Eng 2015; 12: 68-81.

[5] Kihal N, Chitroub S, Polette A, Brunette I, Meunier J. Efficient multimodal ocular biometric system for person authentication based on iris texture and corneal shape. IET Biom 2017; 6(6): 379-386. DOI: 10.1049/iet-bmt.2016.0067.

[6] Okokpujie K, Olajide F, John S, Kennedy CG. Implementation of the enhanced fingerprint authentication in the ATM system using ATmega128 with GSM feedback mechanism. Int Conf on Security and Management (SAM) 2016. Source: (https://www.researchgate.net/profile/Kennedy-Okokpujie/publication/318876644_Implementation_of_the_Enhanced_Fingerprint_Authentication_in_the_ATM_System_Using_ATmega128_with_GSM_Feedback_Mechanism/links/5982d260458515a60df81382/Implementation-of-the-Enhanced-Fingerprint-Authentication-in-the-ATM-System-Using-ATmega128-with-GSM-Feedback-Mechanism.pdf).

[7] Unar JA, Seng WC, Abbasi A. A review of biometric technology along with trends and prospects. Patt Recogn 2017; 47(8): 2673-2688. DOI: 10.1016/j.patcog.2014.01.016.

[8] Ogbanufe O, Kim DJ. Comparing fingerprint-based biometrics authentication versus traditional authentication methods for e-payment. Decis Support Syst 2017; 106: 114. DOI: 10.1016/j.dss.2017.11.003.

[9] Mudholkar SS. Biometrics authentication technique for intrusion detection systems using fingerprint recognition. International Journal of Computer Science, Engineering and Information Technology 2012; 2(1): 57-65. DOI: 10.5121/ijcseit.2012.2106.

[10] Wang Z, Xian J, Man F, Zhang Z. Diagnostic imaging of ophthalmology: A practical atlas. 1st ed. China Mainland: People's Military Medical Press; 2018. ISBN: 978-94-024-1058-7.

[11] Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Sala-khutdinov R. Dropout: A simple way to prevent neural networks from overfitting. J Mach Learn Res 2014; 15: 1929-1958.

[12] Sadikoglu F, Uzelaltinbulat S. Biometric retina identification based on neural network. Procedia Comput Sci 2016; 102: 26-33. DOI: 10.1016/j.procs.2016.09.365.

[13] Khokher R, Singh RC, Jain A. Verification of biometric traits using deep learning. IJITEE 2019; 8(10): 452-459. DOI: 10.35940/ijitee.J1083.08810S19.

[14] Butt MM, Latif G, Iskandar DNFA, Alghazo J, Khan AH. Multi-channel convolutions neural network based diabetic retinopathy detection from fundus images. Procedia Comput Sci 2019; 163: 283-291. DOI: 10.1016/j.procs.2019.12.110.

[15] Yang W, Wang S, Hu J, Zheng G, Valli C. A fingerprint and finger-vein based cancelable multi-biometric system. Patt Recogn 2018; 78: 242-251. DOI: 10.1016/j.patcog.2018.01.026.

[16] Soleymani S, Dabouei A, Kazemi H, Dawson J, Nasrabadi NM. Multi-level feature abstraction from convolutional neural networks for multimodal biometric identification. 24th Int Conf on Pattern Recognition 2018: 3469-3476. DOI: 10.1109/ICPR.2018.8545061.

[17] Fu H, Xu Y, Wong DWK, Liu J. Retinal vessel segmentation via deep learning network and fully-connected conditional random fields. 13th International Symposium on Biomedical Imaging 2016: 698-701. DOI: 10.1109/ISBI.2016.7493362.

[18] DRIVE: Digital Retinal Images for Vessel Extraction. Source: (https://drive.grand-challenge.org/).

[19] Susetianingtias DT, Madenda S, Fitrianingsih, Adlina D, Rodiah, Arianty R. Retinal blood vessel extraction using wavelet decomposition. Int J Adv Comput Sci Appl 2020; 11(4): 351-355. DOI: 10.14569/IJACSA.2020.0110448.

[20] Sasidharan G. Retinal based personal identification system using skeletonization and similarity transformation. IJCTT 2014; 17(3): 144-147. DOI: 10.14445/22312803/IJCTT-V17P127.

[21] Fatima J, Syed AM, Akram MU. A secure personal identification system based on human retina. IEEE Symposium on Industrial Electronics & Applications 2013: 90-95. DOI: 10.1109/ISIEA.2013.6738974.

Authors' information

Rodiah (b. 1981) is currently a lecturer and Vice Head of Postgraduate Academic System Development at Gunadarma University. Since 2012 she has won 6 research grants from the Indonesian Directorate General for Higher Education DIKTI (RISTEKDIKTI). Her research interests are in the areas of medical image processing, especially lung cancer and diabetic retinopathy, computer-aided diagnosis and expert systems, retinal biometrics for identification, and cryptography algorithms. She is the author of 2 books on medical image processing for retinal fundus images and has more than 26 publications in journals, proceedings, and book chapters, as well as 8 intellectual property rights (IPR) and 2 patents. She is also on the developer team of the collaboration matrix at the Indonesian Institute of Sciences. E-mail: rodiah@staff.gunadarma.ac.id.

Madenda S. (b. 1963) is a professor. He is currently the Head of the PhD Program in Information Technology and a lecturer in the PhD program at Gunadarma University. His research interest is in signal, video, and image processing. E-mail: sarif@staff.gunadarma.ac.id.

Susetianingtias D.T. (b. 1974) is currently active as a lecturer at Gunadarma University. Her research interest is in medical image processing. E-mail: diants@staff.gunadarma.ac.id.

Fitrianingsih (b. 1975) is currently active as a lecturer at Gunadarma University. Her research interest is in handwriting recognition. E-mail: fitrianingsih@staff.gunadarma.ac.id.

Adlina D. (b. 1992) is currently an active lecturer at Gunadarma University. Her research interests are in image processing and SNA. E-mail: deaalina9222@staff.gunadarma.ac.id.

Arianty R. (b. 1975) is currently active as a lecturer at Gunadarma University. Her research interest is in databases. E-mail: rinia@staff.gunadarma.ac.id.

Received March 10, 2021. The final version - August 4, 2021.
