
DOI: https://doi.org/10.21323/2414-438X-2024-9-3-249-257

Received 23.07.2024. Accepted in revised form 02.09.2024. Accepted for publication 09.09.2024

Available online at https://www.meatjournal.ru/jour Original scientific article Open Access

DEVELOPMENT OF A MOBILE APPLICATION FOR RAPID DETECTION OF MEAT FRESHNESS USING DEEP LEARNING

Hasan I. Kozan1*, Hasan A. Akyurek2

1 Department of Food Processing, Meram Vocational School, Necmettin Erbakan University, Konya, Turkey 2 Department of Avionics, Faculty of Aviation and Space Sciences, Necmettin Erbakan University, Konya, Turkey

Keywords: Meat quality, rapid detection, deep learning, red meat quality, image processing, Flutter, Android

Abstract

The freshness or spoilage of meat is critical in terms of meat color and quality criteria. Detecting the condition of the meat is important not only for consumers but also for the processing of the meat itself. Meat quality is influenced by various pre-slaughter factors including housing conditions, diet, age, genetic background, environmental temperature, and stress factors. Additionally, spoilage can occur due to the slaughtering process, though post-slaughter spoilage is more frequent and has a stronger correlation with post-slaughter factors. The primary indicator of meat quality is the pH value, which can be high or low. Variations in pH values can lead to adverse effects in the final product such as color defects, microbial issues, short shelf life, reduced quality, and consumer complaints. Many of these characteristics are visible components of quality. This study aimed to develop a mobile application using deep learning-based image processing techniques for the rapid detection of freshness. The attributes of the source and the targeted predictions were found satisfactory, indicating that further advancements could be made in developing future versions of the application.

For citation: Kozan, H. I., Akyurek, H. A. (2024). Development of a mobile application for rapid detection of meat freshness using deep learning. Theory and Practice of Meat Processing, 9(3), 249-257. https://doi.org/10.21323/2414-438X-2024-9-3-249-257

Introduction

Red meat serves as a crucial source of animal protein for a healthy human diet [1]. In Turkey, pork is not consumed, so the need for red meat is met through butcher cattle, and meat-based dishes are popular in this country [2]. Despite rising meat prices due to a decreasing cattle population, beef is still one of the most important dietary sources. Meat products such as meatballs [3], fermented sausages [4], and pastirma [5] are among the most widely consumed. For meat and meat products, quality is very important, and changes in beef quality depend on several post-slaughter factors. Additionally, pre-slaughter stress, improper slaughtering techniques, and genetic characteristics also impact meat quality. Meat quality is closely related to alterations in the structure of meat proteins. Therefore, understanding the post-mortem changes in meat is highly significant for enhancing the quality of meat and meat products [6].

The procedures applied to butcher animals before slaughter play a critical role in determining the hygienic and technological quality of the carcass both during and after the slaughter process. Pre-slaughter conditions are established from the time cattle are loaded at the farm gate until the moment of slaughter. These conditions include factors such as breed and pre-transport feeding, transport distance and duration, loading density, holding methods in abattoir pens, holding duration, fasting duration, and post-transport mobility within the pen.

Following the slaughtering process, metabolic reactions continue in the muscles. As circulation ceases, the muscles try to replenish the necessary energy using glycogen stored within them. If there is an insufficient amount of glycogen in the muscles or if post-mortem energy cannot be provided due to several reasons, adequate levels of lactic acid may not accumulate in the muscles. Consequently, metabolic reactions may not form as expected, and the quality of the meat may not reach the desired level [7].

Prolonged activity of animal muscles under inappropriate conditions, such as stress without adequate rest before slaughter, results in a decrease in glycogen reserves. This low level of glycogen reserve leads to the formation of only a small amount of lactic acid. Consequently, meat with a pH value of 6.0 becomes dark, firm, and dry (DFD: Dark, Firm, Dry), as reported by Young et al. [8]. In DFD meats, the high pH value increases the risk of microbial growth (making them less durable), enhances water-holding capacity, and darkens the color [9]. Conversely, Pale, Soft, Exudative (PSE) meat is often a result of stress accelerating glycolysis and causing a faster-than-normal drop in pH value [10]. Under the influence of these stress factors, the pH value can drop rapidly to 5.3 within 1-1.5 hours, and an acidic rigor mortis develops. The drop in muscle pH and the normal temperature of the muscle at the time of slaughter cause certain proteins, especially myosin, to denature. In PSE meats, the color is pale, the texture is soft, the surface is watery, and the water-holding capacity is low [11]. These types of meats are primarily utilized in the production of cured raw meat and fermented meat products. Within the two hours after slaughter, the muscle pH typically drops to 5.6 and even to as low as 5.2 [9]. However, illumination and storage conditions also affect the meat surface [12], though these factors are controllable after slaughter.

Copyright © 2024, Kozan et al. This is an open access article distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material for any purpose, even commercially, provided the original work is properly cited and states its license.

Some spectroscopic methods have been developed to classify these disordered meat sources. In one study, the factors involved in developing Vis/NIR spectroscopy models to differentiate between PSE (Pale, Soft, Exudative), DFD (Dark, Firm, Dry), and normal chicken breast meat were evaluated [13]. To further explore the differences between PSE, DFD, and normal meat, various studies have been conducted using different deep learning techniques [14]. It has also been suggested that the deep learning approach can be a fast and innovative way to assess chicken meat quality [15]. It is important to understand some definitions, such as machine learning, convolutional neural networks, MobileNet, and transfer learning models. Machine learning is a kind of artificial intelligence (AI) focused on developing algorithms based on data patterns [16], drawing on historical data relationships for that purpose [17]. This approach is fundamentally similar to the way humans learn, and it increases accuracy by using data and algorithms [18]. Convolutional Neural Networks (CNNs) are a class of deep neural networks widely used in fields such as image processing; for example, CNNs have been applied to fruit classification [19]. MobileNet is a type of architecture designed especially for mobile and embedded vision applications [20].

MobileNet provides sufficient performance on low-volume devices [21,22]. In the context of image classification, MobileNet has shown good performance, especially with lower training time [23]. Transfer learning is a machine learning technique in which a model trained on one task is re-purposed on a second, related task [24]. In the context of MobileNet, transfer learning has been widely applied to leverage the pre-trained MobileNet architecture for various tasks. Li et al. [25] performed transfer learning on lightweight CNNs, including MobileNet, for plant disease leaf detection on cell phones, demonstrating the versatility of MobileNet in agricultural applications.

The Google Teachable Machine platform allows training and testing machine learning models using the transfer learning technique [26]. Google Teachable Machine offers a pre-trained MobileNet model and transfer learning. The three basic parameters of a convolutional neural network (CNN) are epoch, batch size, and learning rate [27]. An epoch refers to one complete pass over the entire training dataset. The batch size is the number of training examples utilized in one iteration of the gradient descent algorithm. The learning rate is a coefficient that controls the magnitude of the updates to the model's weights during training. These parameters are crucial for getting optimal results in CNN training [27].

Figure 1. Process stages for meat classification in a mobile application
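To make these three parameters concrete, the toy gradient-descent loop below exposes the epoch count, batch size, and learning rate explicitly. The one-weight linear model is purely illustrative, not the paper's network.

```python
import numpy as np

EPOCHS = 5           # complete passes over the training set
BATCH_SIZE = 4       # examples per gradient-descent update
LEARNING_RATE = 0.1  # step size for each weight update

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 32)   # toy inputs
y = 3.0 * x                      # target relation the model must learn
w = 0.0                          # single trainable weight

for _ in range(EPOCHS):                      # one epoch = full pass over the data
    for i in range(0, len(x), BATCH_SIZE):   # one iteration per mini-batch
        xb, yb = x[i:i + BATCH_SIZE], y[i:i + BATCH_SIZE]
        grad = 2.0 * np.mean((w * xb - yb) * xb)  # d(MSE)/dw on this batch
        w -= LEARNING_RATE * grad                 # learning rate scales the update
```

With these settings the weight converges toward 3.0; a larger learning rate or fewer epochs would leave it further from the target, which is exactly the trade-off the text describes.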

ResNet-50, developed by Microsoft Research Asia, is a convolutional neural network (CNN) architecture that addresses the gradient vanishing problem by introducing residual connections [28]. It is a deep neural network that is part of the ResNet family, known for its ability to perform well with deeper networks due to its residual connections [29]. ResNet-50 is specifically a 50-layer deep CNN architecture [30,31]. ResNet-50 has been used in some applications such as object detection and classification.
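The residual connection idea can be sketched in a few lines: the block computes a correction F(x) and adds the input back before the activation, so gradients can flow through the identity path. The dense layers and tiny weights below are illustrative stand-ins for the real network's convolutions.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    # F(x): two transformations standing in for the block's convolutions.
    out = relu(x @ w1)
    out = out @ w2
    # Identity shortcut: the input is added back before the final activation,
    # which is what lets gradients bypass the transformation path.
    return relu(out + x)

rng = np.random.default_rng(1)
x = rng.standard_normal(8)
w1 = rng.standard_normal((8, 8)) * 0.01  # near-zero weights: F(x) ~ 0
w2 = rng.standard_normal((8, 8)) * 0.01
y = residual_block(x, w1, w2)
```

Note that with near-zero weights the block reduces to (roughly) the identity on positive inputs, which is why very deep stacks of such blocks remain trainable.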

The aim of this study is to understand the practical usability of this model with a mobile application by developing a deep learning-based model in which meat defects can be displayed live and in real time using Google Teachable Machine. Analyzes were also performed with RESNET-50 to verify the results obtained with Google Teachable Machine.

Objects and methods

Mobile application design

The process of classifying meats through a mobile application has three stages. Initially, the device's camera captures an image and transmits it to a previously trained artificial intelligence model. Next, the trained model analyzes the image and outputs a classification, indicating whether the meat is fresh or spoiled. Finally, based on these outputs, the freshness or spoilage status of the meat is displayed on the screen. A flowchart detailing these stages is provided in Figure 1.
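The three stages can be sketched as a simple pipeline. The camera and classifier below are hypothetical stand-ins; the real application feeds camera frames to the trained TFLite model.

```python
def capture_image():
    # Stage 1: stand-in for the device camera (a tiny 2x2 grayscale frame).
    return [[120, 130], [125, 135]]

def classify(image):
    # Stage 2: hypothetical rule standing in for the trained model.
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    return "Fresh" if mean > 100 else "Spoiled"

def display(label):
    # Stage 3: what the UI would show to the user.
    return f"Meat status: {label}"

result = display(classify(capture_image()))
```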

Dataset

In this study, the "Meat Quality Assessment Dataset" prepared by Ulucan et al. [32] was utilized. The dataset includes images of meat taken every two minutes, with concurrent spoilage assessments conducted by an expert, classified according to seven parameters. These parameters are date and time, ambient temperature, product temperature, color change, brightness, odor status, and regional or complete spoilage [32]. The dataset contains 948 images of fresh meat and 948 images of spoiled meat. To enrich this dataset, various augmentation techniques were applied, including angular rotation (15°, 45°, 60°, 75°, 90°, 135°, 180°, 225°, 270°, 315°), flipping (horizontal, vertical), scaling down (50%), scaling up (150%), noise addition (salt-and-pepper, Gaussian), and filtering (Gaussian, median, mode). Following these processes, a total of 18,960 images per category were obtained; screenshots of the previews are provided in Figures 2, 3, and 4.
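The augmentation arithmetic can be sketched as follows. The arbitrary-angle rotations and the three filters are placeholders (NumPy alone provides neither arbitrary-angle rotation nor median filtering), but the variant count per base image matches the reported totals: 19 transformed variants plus the original, so 948 × 20 = 18,960 per category.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (8, 8)).astype(float)  # stand-in 8x8 image

variants = [img]                                   # the original
for angle in (15, 45, 60, 75, 90, 135, 180, 225, 270, 315):
    # np.rot90 handles only right angles; other angles are placeholders here
    variants.append(np.rot90(img, k=angle // 90) if angle % 90 == 0
                    else img.copy())
variants.append(np.fliplr(img))                    # horizontal flip
variants.append(np.flipud(img))                    # vertical flip
variants.append(img[::2, ::2])                     # scale down (50%)
variants.append(np.kron(img, np.ones((2, 2))))     # crude stand-in for scale up
variants.append(np.clip(img + rng.normal(0, 10, img.shape), 0, 255))  # Gaussian noise
noisy = img.copy()                                 # salt-and-pepper noise
noisy[rng.random(img.shape) < 0.05] = 0
variants.append(noisy)
for _ in range(3):                                 # Gaussian/median/mode filters
    variants.append(img.copy())                    # placeholders for brevity

per_image = len(variants)      # 20 variants per base image
total = 948 * per_image        # per-category total after augmentation
```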

Figure 2. Classification folders and model

Figure 3. Images of fresh meats with processing algorithms

Figure 4. Images of spoiled meats with processing algorithms


Implementation of models for mobile application

The machine learning model developed using Google Teachable Machine was exported and integrated into an Android application. A quantized model was created using the model.tflite and label.txt files.
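A pure-Python sketch of the on-device post-processing step, assuming (as is common for quantized TFLite image classifiers) one uint8 score per class that is rescaled to [0, 1]; the label list stands in for the parsed contents of label.txt, and the raw output values are hypothetical:

```python
labels = ["Fresh", "Spoiled"]   # stand-in for the exported label.txt

def classify(output_bytes):
    """Map the quantized model's raw uint8 outputs to a label and score."""
    scores = [b / 255.0 for b in output_bytes]          # dequantize to [0, 1]
    best = max(range(len(scores)), key=scores.__getitem__)
    return labels[best], scores[best]

label, confidence = classify([250, 5])   # hypothetical raw model output
```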

Results and discussion

Managing Google Teachable Machine

Due to the restriction by Teachable Machine that allows a maximum of 10,000 samples per category, 10,000 random samples from each class were selected for the study. These samples were divided into training and testing datasets, with 8,500 samples (85%) used for training and 1,500 samples (15%) reserved for testing. The accuracy rates for each class after training can be seen in Table 1.
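The per-class sampling and split described above amounts to the following (the seed is illustrative; the study's actual random selection is not specified):

```python
import random

SAMPLES_PER_CLASS = 10_000   # Teachable Machine's per-category cap
TRAIN_FRACTION = 0.85

random.seed(0)                            # illustrative, for reproducibility
indices = list(range(SAMPLES_PER_CLASS))  # one index per sampled image
random.shuffle(indices)
cut = int(SAMPLES_PER_CLASS * TRAIN_FRACTION)
train_idx, test_idx = indices[:cut], indices[cut:]  # 8,500 / 1,500 per class
```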

Table 1. The results of the accuracy per class

Class Accuracy #Samples

Fresh 1.00 1,500

Spoiled 1.00 1,500

The model's learning process details, such as the total number of epochs, batch size, learning rate, and dataset design for inputs and outputs, are illustrated in Figure 5. This structured approach helps in comprehensively understanding how the model was trained and how it performs across different parameters and dataset characteristics.

Testing of models

The confusion matrix for the model is presented in Figure 6. This matrix helps in visualizing the accuracy of the model by showing the number of correct and incorrect predictions made for each class.

Figure 7 displays the loss analysis for each epoch. Thus, it is shown how the model's prediction error decreases over time as it learns from the training data.

Finally, Figure 8 presents the accuracy per epoch, complementing the loss analysis in Figure 7. Together with metrics such as precision, recall, and F1-score, these plots provide a more comprehensive assessment of training performance.

In the scenario described, the model showcases an optimal training performance, as indicated by the absence of overfitting or underfitting in the graphs. The accuracy analysis per epoch had a score of 1.00, which is an indicator of its predictive capability. Furthermore, from the second epoch onwards, the training model appears to stabilize, maintaining consistent accuracy values without any fluctuations.

Figure 5. Model parameters

Figure 6. Confusion matrix (actual class vs. prediction): Fresh 1500 correct, 0 misclassified; Spoiled 0 misclassified, 1500 correct

Figure 7. Loss per epoch

Figure 8. Accuracy per epoch

The results indicate a well-trained model. This is ideal in machine learning, as it suggests that the model not only learns the patterns in the training data but also effectively applies this knowledge to unseen data, which is critical for practical applications.

System assessment

During the evaluation phase of this system, the black-box testing method was used. This approach evaluates the system's functionalities without knowledge of the internal workings of the application, which mimics how end-users would interact with the system.

Figures 9, 10, and 11 show the test images (fresh and spoiled) processed in Google's Teachable Machine.

Figure 9 presents the result of the test classification on an image of fresh meat. This would demonstrate how accurately the system can identify meat that is still fresh and safe for consumption based on its trained model.

Figure 10 displays the results of test classifications on images of spoiled meat. This is crucial for determining the system's effectiveness in correctly identifying meat that is no longer suitable for consumption, thus avoiding potential health risks.

Figure 11 contains screenshots from the mobile application developed for this testing. These images provide insight into how the application presents the classification results to the user, offering a user-friendly interface that displays whether the meat is fresh or spoiled based on the image captured by the camera sensor on an Android device.

Such evaluations and visual presentations in the testing phase are essential for validating the reliability and usability of the machine learning model and the overall system. They help to ensure that the application performs well in practical scenarios, which is key to user satisfaction and safety.

ResNet-50's unique architecture, which includes residual connections and a bottleneck structure, enables it to effectively address the challenges associated with training deep neural networks. ResNet-50 was used to validate the Google Teachable Machine results. The training progress of ResNet-50 is given in Figure 12, and the results of ResNet-50 are given in Table 2.

Table 2. Validation accuracy of the proposed model

Parameter Result

Validation accuracy 99.39%

Training finished Max epochs completed

Training elapsed time 236 min 34 sec

Training epoch cycle 10 of 10

Iteration 9480 of 9480

Iteration per epoch 948

Max iteration 9480

Validation frequency 30 iterations

Hardware resource Single GPU

Learning rate schedule Constant

Learning rate 0.0001
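The schedule in Table 2 is internally consistent, as a quick cross-check shows:

```python
# Cross-check of the ResNet-50 training schedule reported in Table 2.
iterations_per_epoch = 948
epochs = 10
max_iterations = iterations_per_epoch * epochs   # matches the reported 9480
validation_frequency = 30                        # iterations between validations
validation_runs = max_iterations // validation_frequency  # validations performed
```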

Precision-Recall curve and ROC curve of ResNet-50 are also given in Figure 13 below.

Confusion matrix of ResNet-50 is given in Figure 14.

The results obtained from Google Teachable Machine and ResNet-50 are presented in Table 3, showcasing the precision, recall, and F1-score for the classes "Fresh" and "Spoiled".

Table 3. Precision, recall and F1-score results of the classes (Fresh and Spoiled)

Class Precision Recall F1-Score

Fresh 0.9952 0.9926 0.9939

Spoiled 0.9926 0.9952 0.9939
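The F1-scores in Table 3 follow directly from the listed precision and recall values via F1 = 2PR/(P + R):

```python
# F1 is the harmonic mean of precision (P) and recall (R).
def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

f1_fresh = f1_score(0.9952, 0.9926)    # "Fresh" class
f1_spoiled = f1_score(0.9926, 0.9952)  # "Spoiled" class
```

Because the harmonic mean is symmetric, swapping precision and recall between the two classes yields identical F1-scores, which is why both rows report 0.9939.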

The analysis demonstrates that the models exhibit exceptionally high performance metrics across both classes. The precision and recall values for both "Fresh" and "Spoiled" categories are consistently above 0.99, indicating a highly accurate classification capability. Specifically, the precision of the "Fresh" class is 0.9952, with a recall of 0.9926, resulting in an F1-score of 0.9939. Conversely, the "Spoiled" class shows a precision of 0.9926, a recall of 0.9952, and an F1-score of 0.9939. These metrics reflect a balanced and robust classification performance, demonstrating the effectiveness of the deep learning models in distinguishing between fresh and spoiled meat with minimal misclassification.

Figure 9. The result of the test classification on an image of fresh meat

Figure 10. The results of test classifications on images of spoiled meat

Figure 11. Screenshots from the mobile application developed for this testing

Figure 12. The training progress of ResNet-50

Throughout the development process, Flutter was used to build the application, allowing for testing across different cameras and environments. This approach demonstrated the flexibility and adaptability of the application under various conditions.

The research confirmed that Android smartphone devices can effectively employ a trained dataset and model tailored for specific detection goals. By using Google's Teachable Machine, a machine learning model could be implemented swiftly and with minimal resource expenditures. This capability is crucial for quickly reaching targeted outcomes and addressing significant issues in the industry through further advanced research.

Figure 13. (a) Precision-Recall curve, (b) ROC curve of ResNet-50

Figure 14. Confusion matrix of ResNet-50

Conclusion

In this study, an artificial intelligence application was developed using deep learning to determine whether beef was fresh or spoiled. The results were highly satisfactory, showing extremely high accuracy, sensitivity, and precision rates, with the model created using Google's Teachable Machine achieving up to 100% in these metrics. However, it was noted that the accuracy could decrease depending on the level of lighting in the detection area. This study illustrates the potential of integrating accessible AI technologies like Google's Teachable Machine into mobile applications, providing powerful tools for industry applications where quick and accurate classification of product quality can significantly impact health and business outcomes.

The deep learning-based mobile application for quick meat freshness detection offers several advantages, such as better meat quality evaluation and increased consumer safety. By incorporating MLOps practices, the model could learn from new images continuously, which would increase accuracy, decrease bias (especially in difficult lighting conditions), and further increase its effectiveness. Adding a variety of datasets will also help guarantee strong performance in a range of situations. Maintaining high accuracy and adaptability through regular model updates and retraining will eventually improve user experiences and trust in the application.

REFERENCES

1. Oyan, O., Şenyuz, H. H., Arkose, C. Ç. (2024). Comparison of carcass weight and carcass characteristics in some cattle breeds. Research and Practice in Veterinary and Animal Science (REPVAS), 1(1), 1-8. http://doi.org/10.69990/repvas.2024.1.1.1

2. Kushniruk, H., Rutynskyi, M. (2022). Development of the infrastructure of Turkish restaurants in the tourist center of Eastern Europe: The case of Kyiv. GastroMedia Journal, 1(1), 1-18.

3. Erdem, N., Babaoglu, A. S., Poçan, H. B., Karakaya, M. (2020). The effect of transglutaminase on some quality properties of beef, chicken, and turkey meatballs. Journal of Food Processing and Preservation, 44(10), Article e14815. https://doi.org/10.1111/jfpp.14815

4. Kozan, H.I., Sariçoban, C. (2023). Effect of oat bran addition on the survival of selected probiotic strains in Turkish fermented sausage during cold storage. Food Bioscience, 54, Article 102820. https://doi.org/10.1016/j.fbio.2023.102820

5. Aksu, M.I., Konar, N., Turan, E., Tamturk, F., Serpen, A. (2024). Properties of encapsulated raspberry powder and its efficacy for improving the color stability and amino acid composition of pastirma cemen pastes with different pH during long term cold-storage. Journal of Food Science and Technology. https://doi.org/10.1007/s13197-024-06029-6

6. Kemp, C. M., Parr, T. (2012). Advances in apoptotic mediated proteolysis in meat tenderisation. Meat Science, 92(3), 252-259. https://doi.org/10.1016/j.meatsci.2012.03.013

7. Thompson, J.M., Perry, D., Daly, B., Gardner, G. E., Johnston, D. J., Pethick, D. W. (2006). Genetic and environmental effects on the muscle structure response post-mortem. Meat Science, 74(1), 59-65. https://doi.org/10.1016/j.meatsci.2006.04.022

8. Young, O., West, J., Hart, A. L., van Otterdijk, F. F. H. (2004). A method for early determination of meat ultimate pH. Meat Science, 66(2), 493-498. https://doi.org/10.1016/s0309-1740(03)00140-2

9. Oztan, A. (2005). Meat science and technology. TMMOB Gida Muhendisleri Odasi, Ankara (In Turkish)

10. Tornberg, E. (1996). Biophysical aspects of meat tenderness. Meat Science, 43, 175-191. https://doi.org/10.1016/0309-1740(96)00064-2

11. Woelfel, R. L., Owens, C. M., Hirschler, E. M., Martinez-Dawson, R., Sams, A. R. (2002). The characterization and incidence of pale, soft, and exudative broiler meat in a commercial processing plant. Poultry Science, 81(4), 579-584. https://doi.org/10.1093/ps/81.4.579

12. Kozan, H.I., Sariçoban, C. (2016). Effects of light sources on physicochemical/color properties and oxidative/microbiological stability of ground beef during storage at 4 °C. Fleischwirtschaft International, 4, 63-68.


13. Jiang, H., Yoon, S.-C., Zhuang, H., Wang, W., Yang, Y. (2017). Evaluation of factors in development of Vis/NIR spectroscopy models for discriminating PSE, DFD and normal broiler breast meat. British Poultry Science, 58(6), 673-680. https://doi.org/10.1080/00071668.2017.1364350

14. Jaddoa, M.A., Zaidan, A. A., Gonzalez, L. A., Deveci, M., Cuthbertson, H., Al-Jumaily, A. et al. (2024). An approach-based machine learning and automated thermal images to predict the dark-cutting incidence in cattle management of healthcare supply chain. Engineering Applications of Artificial Intelligence, 135, Article 108804. https://doi.org/10.1016/j.engappai.2024.108804

15. Yang, Y., Wang, W., Zhuang, H., Yoon, S., Bowker, B., Jiang, H. (July 7-10, 2019). Prediction of quality attributes of chicken breast fillets by using hyperspectral imaging technique combined with deep learning algorithm. 2019 ASABE Annual International Meeting. American Society of Agricultural and Biological Engineers. Boston, Massachusetts, 2019.

16. Ozel, M.A., Gül, M.Y., Güneş, E. (2023). A machine learning approach to detecting meal contents in gastronomy with the YOLO algorithm. NeuGastro Journal, 2(1), 31-38. (In Turkish)

17. Wang, L., Alexander, C. A. (2016). Machine learning in big data. International Journal of Mathematical, Engineering and Management Sciences, 1(2), 52-61. https://doi.org/10.33889/ijmems.2016.1.2-006

18. Mahmoud, H. (2023). Comparison between machine learning algorithms for cardiovascular disease prediction. Scientific Journal for Financial and Commercial Studies and Research, 4(1), 909-926. https://doi.org/10.21608/cfdj.2023.258074 (In Arabic)

19. Vasumathi, M., Kamarasan, M. (2021). An LSTM based CNN model for pomegranate fruit classification with weight optimization using dragonfly technique. Indian Journal of Computer Science and Engineering, 12(2), 371-384.

20. Kulkarni, U., S.M., M., Gurlahosur, S. V., Bhogar, G. (2021). Quantization friendly MobileNet (QF-MobileNet) architecture for vision based applications on embedded platforms. Neural Networks, 136, 28-39. https://doi.org/10.1016/j.neunet.2020.12.022

21. Winoto, A. S., Kristianus, M., Premachandra, C. (2020). Small and slim deep convolutional neural network for mobile device. IEEE Access, 8, 125210-125222. http://doi.org/10.1109/ACCESS.2020.3005161

22. Sundara Sobitha Raj, A. P., Vajravelu, S. K. (2019). DDLA: Dual deep learning architecture for classification of plant species. IET Image Processing, 13(12), 2176-2182. https://doi.org/10.1049/iet-ipr.2019.0346

23. Sharma, A., Singh, A., Choudhury, T., Sarkar, T. (2021). Image classification using ImageNet classifiers in environments with limited data. PREPRINT (Version 1) available at Research Square. https://doi.org/10.21203/rs.3.rs-428416/v1

24. Hilmizen, N., Bustamam, A., Sarwinda, D. (10-11 December, 2020). The multimodal deep learning for diagnosing COVID-19 pneumonia from chest CT-scan and X-ray images. 2020 3rd International Seminar on Research of Information Technology and Intelligent Systems (ISRITI). IEEE, 2020. http://doi.org/10.1109/ISRITI51436.2020.9315478

25. Li, Y., Xue, J., Wang, K., Zhang, M., Li, Z. (2022). Surface defect detection of fresh-cut cauliflowers based on convolutional neural network with transfer learning. Foods, 11(18), Article 2915. https://doi.org/10.3390/foods11182915

26. Jordan, B., Devasia, N., Hong, J., Williams, R., Breazeal, C. (2-9 February, 2021). PoseBlocks: A toolkit for creating (and dancing) with AI. Proceedings of the AAAI Conference on Artificial Intelligence. Vancouver, Canada, 2021. https://doi.org/10.1609/aaai.v35i17.17831

27. Warden, P., Situnayake, D. (2019). TinyML: Machine learning with TensorFlow Lite on Arduino and ultra-low-power microcontrollers. O'Reilly Media, 2019.

28. Suherman, E., Rahman, B., Hindarto, D., Santoso, H. (2023). Implementation of ResNet-50 on end-to-end object detection (DETR) on objects. SinkrOn, 8(2), 1085-1096. http://doi.org/10.33395/sinkron.v8i2.12378

29. Lee, J., Kim, T., Beak, S., Moon, Y., Jeong, J. (2023). Real-time pose estimation based on ResNet-50 for rapid safety prevention and accident detection for field workers. Electronics, 12(16), Article 3513. https://doi.org/10.3390/electronics12163513

30. Tao, X., Gandomkar, Z., Li, T., Yi, J., Brennan, P.C., Reed, W.M. (29 March, 2024). CNN-based transfer learning with 10-fold cross-validation: A novel approach for customized education of mammography training. Medical Imaging 2024: Image Perception, Observer Performance, and Technology Assessment. SPIE, 2024. https://doi.org/10.1117/12.3006659

31. Rachmad, A., Syarief, M., Hutagalung, J., Hernawati, S., Rochman, E. M. S., Asmara, Y. P. (2024). Comparison of CNN architectures for mycobacterium tuberculosis classification in sputum images. Ingénierie des Systèmes d'Information, 29(1), 49-56. https://doi.org/10.18280/isi.290106

32. Ulucan, O., Karakaya, D., Turkan, M. (2019). Meat quality assessment based on deep learning. 2019 Innovations in Intelligent Systems and Applications Conference (ASYU). Izmir, Turkey, 2019. https://doi.org/10.1109/ASYU48272.2019.8946388

AUTHORS INFORMATION

Hasan I. Kozan, Dr., Head of Food Processing Department, Meram Vocational School, Necmettin Erbakan University. Konya, 42130, Turkey. Tel.: +90-546-223-05-46, E-mail: h.ibrahimkozan@gmail.com ORCID: https://orcid.org/0000-0002-2453-1645 * corresponding author

Hasan A. Akyurek, Dr., Assistant Professor, Department of Avionics, Faculty of Aviation and Space Sciences, Necmettin Erbakan University. Konya, 42130, Turkey. Tel.: +90-507-368-99-48, E-mail: hsnakyurek@gmail.com

ORCID: https://orcid.org/0000-0002-0520-9888

All authors bear responsibility for the work and presented data.

Hasan I. Kozan: Conceptualization (lead); data curation (equal); formal analysis (equal); funding acquisition (lead); investigation (lead); methodology (equal); project administration (lead); resources (lead); software (equal); supervision (equal); validation (equal); visualization (equal); writing — original draft (equal); writing — review and editing (equal).

Hasan A. Akyurek: Data curation (equal); formal analysis (equal); methodology (equal); software (equal); supervision (equal); validation (equal); visualization (equal); writing — original draft (equal); writing — review and editing (equal).

The authors were equally involved in writing the manuscript and bear the equal responsibility for plagiarism.

The authors declare no conflict of interest.
