Текст научной работы на тему «NEURAL NETWORKS AND ALGORITHMS FOR DETECTING REAL-TIME OBJECTS IN PRACTICE»

UDC 004.83

NEURAL NETWORKS AND ALGORITHMS FOR DETECTING REAL-TIME OBJECTS IN PRACTICE

YKLASSOVA SYMBAT YKLASSKYZY

Undergraduate student of the Caspian University of Technology and Engineering named after Sh. Yessenov
Scientific supervisor: Kenzhebayeva Zh.Y.
Aktau, Kazakhstan

Abstract: In this paper, we present a detailed study of a real-time parking space detection system using neural networks. We address the problem of optimizing urban mobility through smart infrastructure, focusing on real-time object detection. The study emphasizes the importance of automating tasks requiring speed and accuracy, such as traffic and parking management. The purpose of the research is to develop an automated parking meter based on object detection algorithms, particularly the YOLO algorithm.

Relevance of the study lies in the advanced object detection algorithms like YOLO, crucial for applications such as smart cities and surveillance systems. By comparing various methods like R-CNN and YOLO, this research offers insights into balancing accuracy, speed, and computational efficiency.

Keywords: YOLO, Object Detection, Neural Networks, Parking Space Counter, Real-Time Data Processing, Modern Device Optimization, Smart Cities


Object recognition is a key technology in computer vision that allows machines to identify and classify objects in images or videos. The ability to perceive the visual world much as human vision does is of great importance to many industries, including healthcare, autonomous vehicles, video surveillance, and intelligent infrastructure. The history of object recognition traces back to early attempts at pattern recognition, when basic algorithms were used to identify simple shapes. These methods, however, were rigid and limited in their ability to process diverse visual data. The emergence of machine learning and, more recently, deep learning marked a significant turning point in this field. Neural networks, inspired by the structure and function of the human brain, have proven particularly effective at learning and recognizing patterns from large amounts of data.

Neural networks, a core concept of artificial intelligence, have revolutionized many areas of computer vision, especially object recognition. These networks are modeled on the structure of the human brain and consist of interconnected nodes, or "neurons," that mimic how the brain processes information. A neural network architecture usually includes an input layer, several hidden layers, and an output layer. Each layer transforms its inputs into a more abstract representation, allowing the network to learn complex patterns and relationships within the data.
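As a minimal illustration of this layered structure, the NumPy sketch below passes an input through two hidden layers and an output layer; the layer sizes are arbitrary choices for the example, not values from the project.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def forward(x, layers):
    """Pass input x through a list of (weights, bias) layers."""
    for w, b in layers[:-1]:
        x = relu(x @ w + b)          # hidden layers: affine map + nonlinearity
    w, b = layers[-1]
    return x @ w + b                 # output layer: raw scores

rng = np.random.default_rng(0)
# 4 inputs -> two hidden layers of 8 units -> 3 outputs (illustrative sizes)
sizes = [4, 8, 8, 3]
layers = [(rng.normal(size=(m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

scores = forward(rng.normal(size=(1, 4)), layers)
print(scores.shape)  # (1, 3)
```

Each matrix multiplication followed by a nonlinearity is one "layer" in the sense described above; stacking them is what lets the network build progressively more abstract representations.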

The concept of neural networks originated in the 1940s, when the first artificial neuron model was introduced by McCulloch and Pitts. However, neural networks did not gain popularity until the 1980s and 1990s, mainly thanks to advances in computing power and the development of the backpropagation algorithm by Rumelhart, Hinton, and Williams in 1986. Backpropagation allowed neural networks to iteratively adjust their weights to reduce prediction error, making them far more effective at learning from data [1].

An important milestone in the evolution of neural networks was the development of convolutional neural networks (CNNs) by Yann LeCun and colleagues in the late 1980s and early 1990s. Introduced in 1998, LeCun's LeNet-5 was one of the first CNNs to be used successfully for digit recognition, an important step toward solving complex image processing problems. CNNs are well suited to image data because they automatically and adaptively learn spatial hierarchies of features such as edges, textures, and shapes. A CNN consists of several types of layers: convolutional layers, pooling layers, and fully connected layers. Convolutional layers apply filters to the input image to detect local patterns, pooling layers reduce spatial size while retaining important features, and fully connected layers combine these features to produce the final prediction. This architecture became the basis of modern object recognition algorithms [2].
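The convolution and pooling operations described above can be sketched in a few lines of NumPy; the 28×28 image and the vertical-edge filter here are arbitrary choices for illustration.

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2-D convolution of a single-channel image with one filter."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling: shrinks each spatial dimension by `size`."""
    h, w = fmap.shape
    h, w = h - h % size, w - w % size
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

img = np.random.default_rng(1).random((28, 28))      # stand-in grayscale image
edge = np.array([[1.0, 0.0, -1.0]] * 3)              # simple vertical-edge filter
features = max_pool(conv2d(img, edge))               # (26, 26) -> (13, 13)
print(features.shape)  # (13, 13)
```

The convolution detects a local pattern (here, vertical edges) and the pooling step halves the resolution while keeping the strongest responses, exactly the division of labor described in the text.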

The development of object recognition algorithms has been marked by continuous innovation, driven by the growing complexity of visual data and the demand for more accurate and efficient models. The first attempts at object recognition in the 1960s and 1970s relied on hand-crafted feature extraction, with engineers designing algorithms to identify simple shapes and patterns in images. These approaches, however, could not handle the diversity and complexity of real-world visual data [3]. In the 1990s, machine-learning-based methods appeared that could extract features from data automatically. Support vector machines (SVMs) and k-nearest neighbors (k-NN) were among the popular algorithms of that era. Although these methods were a significant improvement over hand-crafted feature extraction, they still required substantial preprocessing and were poorly suited to complex image data. The real breakthrough in object recognition came in the 2000s with the advent of deep learning, and in particular with the success of CNNs. The turning point for deep learning in computer vision was the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) in 2012, where AlexNet, a deep network developed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, decisively outperformed all other models. The AlexNet architecture, with its many convolutional layers, ReLU activations, and dropout layers, set a new standard for object recognition and demonstrated the potential of deep learning in this area. Following AlexNet's success, many object detection algorithms have been developed, each pushing the boundaries of speed, accuracy, and efficiency. These include the family of region-based convolutional neural networks (R-CNN), the Single Shot MultiBox Detector (SSD), and You Only Look Once (YOLO). Each of these algorithms introduced new techniques that improved detection efficiency enough to enable real-time applications [4].

ОФ "Международный научно-исследовательский центр "Endless Light in Science"

As object recognition technology has developed, several algorithms have become leaders in the field, each with its own methodology and application domain. Below is a detailed description of some of these key algorithms:

- R-CNN (Regions with Convolutional Neural Networks): developed by Ross Girshick and colleagues in 2014, R-CNN was the first major algorithm to combine region proposals with CNNs for object detection. The R-CNN approach generates roughly 2,000 region proposals per image and then runs a CNN on each region to classify the object it contains. Despite its high accuracy, R-CNN's computational cost and low processing speed, caused by running the CNN separately for every region proposal, limited its practical application.

- Fast R-CNN: introduced by Girshick in 2015, Fast R-CNN addressed R-CNN's shortcomings by streamlining the detection pipeline. In Fast R-CNN, the entire image is passed through a CNN once to produce a feature map, and the region proposals are then projected onto this feature map. A Region of Interest (RoI) pooling layer produces fixed-size feature vectors for each region proposal, which are then classified. This approach significantly reduces computation and increases speed while maintaining high accuracy.

- Faster R-CNN: in 2015, Faster R-CNN, introduced by Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun, further optimized detection by adding a Region Proposal Network (RPN). The RPN shares convolutional layers with the detection network, which lets it generate region proposals far faster than previous methods. This innovation made Faster R-CNN one of the most accurate and efficient object detection algorithms of its time.

- SSD (Single Shot MultiBox Detector): introduced in 2016 by Wei Liu and co-authors, SSD takes a different approach, eliminating region proposals entirely. Instead, SSD directly predicts bounding boxes and class scores for multiple objects at different scales from feature maps produced by a CNN. The SSD architecture is fully convolutional, enabling fast detection while maintaining high accuracy, which made SSD well suited to real-time object detection tasks.

- YOLO: created in 2016 by Joseph Redmon and co-authors, YOLO revolutionized object detection by framing it as a single regression problem. YOLO divides the input image into a grid, and each cell predicts bounding boxes and class probabilities in a single pass through the network. This lets YOLO process images very quickly, making it one of the fastest object detection algorithms, albeit with somewhat lower accuracy than region-based methods such as Faster R-CNN [5].

You Only Look Once (YOLO) is a modern real-time object detection algorithm proposed in 2015 by Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi in their well-known paper "You Only Look Once: Unified, Real-Time Object Detection". The authors formulate object detection as a regression problem rather than a classification problem: a single convolutional neural network (CNN) predicts spatially separated bounding boxes together with the class probabilities associated with them [6].

YOLO is a popular algorithm for real-time object detection. It replaces the traditional multi-stage pipeline with a single neural network that both classifies objects and predicts their bounding boxes. It is therefore heavily optimized for detection performance and runs much faster than approaches that use separate networks to localize and classify objects. This is achieved by repurposing a traditional image classifier for a regression task: predicting the bounding boxes of objects. In this article we consider only YOLOv1, the first of the many iterations this architecture has gone through. Although later iterations brought many improvements, the basic idea of the architecture has remained the same. YOLOv1, often simply called YOLO, can perform object detection at 45 FPS, faster than real time, making it a great choice for applications that need real-time detection. It looks at the entire image at once, and only once (hence the name "You Only Look Once"), which allows it to capture the context of detected objects. This roughly halves the number of false positives compared to R-CNN, which examines different parts of the image separately. In addition, YOLO generalizes well across object representations, which allows it to be applied in new environments [7].

The architecture of the YOLO model consists of three main components: the backbone, the neck, and the head. The backbone is the part of the network made up of convolutional layers that extract and process the main features of the image. It is first pretrained on a classification dataset, typically at a lower resolution than the final detection model, since detection requires finer detail than classification [8].

The neck combines the feature maps produced by the backbone's convolutional layers, and the head uses fully connected layers to predict class probabilities and bounding box coordinates. The head is the final output layer of the network and can be replaced with other layers of the same input shape for transfer learning. As discussed above, the head outputs an S × S × (C + B × 5) tensor; in YOLO's original paper it has size 7 × 7 × 30, with grid size S = 7, C = 20 classes, and B = 2 predicted bounding boxes. These three parts of the model work together: first the main visual features are extracted from the image, then the objects are classified and localized.
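A small sketch of how such an output tensor can be decoded. The tensor here is random stand-in data rather than real network output, and the per-box layout (x, y, w, h, confidence) follows the original paper; scoring each box per class as (class probability × box confidence) is the paper's class-specific confidence.

```python
import numpy as np

S, B, C = 7, 2, 20                           # grid size, boxes per cell, classes
pred = np.random.default_rng(2).random((S, S, C + B * 5))  # stand-in for a network output
assert pred.shape == (7, 7, 30)              # matches the paper's 7 x 7 x 30 tensor

def best_detection(pred):
    """Return (row, col, class_id, score) of the single strongest detection."""
    class_probs = pred[..., :C]                      # (S, S, C) class probabilities
    boxes = pred[..., C:].reshape(S, S, B, 5)        # (S, S, B, 5): x, y, w, h, conf
    conf = boxes[..., 4]                             # (S, S, B) box confidences
    # class-specific score = class probability * box confidence
    scores = class_probs[:, :, None, :] * conf[:, :, :, None]   # (S, S, B, C)
    row, col, b, cls = np.unravel_index(scores.argmax(), scores.shape)
    return int(row), int(col), int(cls), float(scores[row, col, b, cls])

row, col, cls, score = best_detection(pred)
print(row, col, cls, round(score, 3))
```

In the real algorithm this scoring is followed by thresholding and non-maximum suppression over all cells, which are omitted here for brevity.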

The article titled "Evolution of the YOLO family of neural networks: from version 1 to version 7" provides a comprehensive overview of the development and improvement of the YOLO object detection framework. Table 1 below summarizes that work and the stages of YOLO's evolution:

Table 1.

The evolution of YOLO from V1 to V7.

| Version | Key Features | Advancements | Applications |
|---------|--------------|--------------|--------------|
| YOLOv1 | Unified detection, single convolutional network | First significant speedup in object detection | Real-time detection, basic object tasks |
| YOLOv2 | Introduced multi-scale detection, anchor boxes, YOLO9000 | Increased number of detectable objects, improved accuracy | Enhanced object detection tasks |
| YOLOv3 | Deeper network, better feature extraction | Improved performance on small objects and complex scenes | Autonomous driving, security |
| YOLOv4 | CSPNet, PANet, advanced data augmentation | Achieved state-of-the-art performance | Industrial automation, complex detection |
| YOLOv5 | Enhanced training, inference efficiency, developed by a different team | Popular for practical applications due to ease of use and speed | Widely adopted in real-time systems |
| YOLOv6 | Focused on reducing complexity while maintaining performance | Optimized for real-time applications | Autonomous systems, edge devices |
| YOLOv7 | Self-distillation, further architectural improvements | Increased accuracy and efficiency, pushing the boundaries of real-time detection | Latest cutting-edge applications |

The article provides a detailed overview of how YOLO has evolved over time, highlighting the main improvements and innovations in each version. It traces the framework's progress from its initial release to YOLOv7, showing how these continuous improvements have increased its speed, accuracy, and usability in real-time object detection scenarios [9].

Comparison with other algorithms. The research paper "A Comparative Study of Various Object Detection Algorithms and Performance Analysis" by Anand John and Divyakant Mawan examines the evolution of detection algorithms. Its key points are:

1. R-CNN (2014): relies on region proposals, but suffers from slow detection and high computational cost.

2. Fast R-CNN (2015): improves efficiency through feature map sharing, reducing processing time.

3. Faster R-CNN (2015): introduces Region Proposal Networks (RPNs) to further increase speed.

4. YOLO (2016): an innovative approach that processes the entire image in a single pass, providing real-time performance with some compromises in precision.

Each algorithm offers a different balance between detection speed, accuracy, and computational requirements. R-CNN-based approaches deliver high accuracy but can be slow, while YOLO enables faster, real-time detection. Fast R-CNN improves efficiency and performance over R-CNN but may still struggle in real-time applications compared with YOLO. The article provides a comprehensive comparison showing which family suits which use case: real-time applications favor YOLO, while more precise but slower systems favor R-CNN and its variants [10].

The use of YOLO in the parking meter project. In the context of the parking meter project, YOLO's ability to identify objects quickly and accurately is a key advantage. The project identifies and classifies parking spaces as occupied or empty based on data from cameras installed in the parking lot. YOLO's high processing speed ensures the system can provide users with real-time updates showing free parking spaces without significant delay.

Implementation involves training the YOLO model on a dataset of parking-lot images annotated with the locations of parking spaces and their occupancy status. At inference time, the model processes each camera frame, locates the parking spaces, and classifies them. The results are then shown in the user interface, indicating which slots are available (for example, "Free: A5, C6"). One of the challenges in this project is detecting parking spaces accurately under varying lighting conditions, at different angles, and in situations where other vehicles or objects occlude them. YOLO's robustness and ability to generalize across such situations make it suitable for this problem. In addition, the system's accuracy can be further improved by fine-tuning the model and applying data augmentation techniques, enabling reliable operation in the real world.
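One simple way to map detections to named slots is sketched below; the slot names and rectangle coordinates are hypothetical, chosen for illustration rather than taken from the project's actual layout. A slot is marked occupied when a detected car's center falls inside it.

```python
# Slot name -> (x1, y1, x2, y2) rectangle in image coordinates (hypothetical layout)
SLOTS = {"A5": (0, 0, 100, 60), "C6": (120, 0, 220, 60)}

def slot_status(detections, slots=SLOTS):
    """Mark a slot 'occupied' if any detected car's center falls inside it.

    `detections` is a list of (x1, y1, x2, y2) car bounding boxes, e.g. from
    a YOLO model's output after confidence filtering.
    """
    status = {name: "free" for name in slots}
    for (x1, y1, x2, y2) in detections:
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2        # center of the car box
        for name, (sx1, sy1, sx2, sy2) in slots.items():
            if sx1 <= cx <= sx2 and sy1 <= cy <= sy2:
                status[name] = "occupied"
    return status

result = slot_status([(10, 5, 80, 50)])   # one car, centered inside slot A5
print(result)
```

A center-point test is the cheapest rule; an overlap-ratio test is more robust when cars straddle slot boundaries.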

The main performance indicators for object detection models are mean Average Precision (mAP), Intersection over Union (IoU), and inference time. These metrics show how well YOLOv8 detects and classifies parking spaces in real time.

1. Mean Average Precision (mAP): this metric evaluates the model's precision and recall across different confidence thresholds. In my tests, YOLOv8 achieved a high mAP score, indicating its ability to locate parking spaces accurately in varied situations. The model's mAP stayed above 0.90, showing that the system can reliably distinguish empty from occupied spaces in many scenarios.

2. Intersection over Union (IoU): IoU measures the overlap between the predicted bounding box and the ground-truth object. During evaluation, I aimed for an IoU of at least 0.5, the industry-standard threshold for object detection. YOLOv8 consistently reached this threshold, especially for clearly delineated parking spaces. However, where the lot was crowded or vehicles were parked at unusual angles, IoU values dropped slightly.
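IoU for axis-aligned boxes can be computed directly; a minimal sketch:

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap width/height are clamped at zero when the boxes do not intersect
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))   # identical boxes -> 1.0
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))   # half-overlap -> 1/3, below the 0.5 threshold
```

A prediction counts as correct at the standard threshold when `iou(pred, truth) >= 0.5`.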

3. Inference time: inference time is a critical factor for real-time parking detection. In my deployment tests, YOLOv8 processed each video frame in 15-30 milliseconds on a standard GPU, making it suitable for real-time applications where low latency matters. On edge devices, optimizations such as model quantization reduce inference time further, allowing continuous operation without significant delays [11].

YOLO in different environments. One of the main challenges in deploying any object detection model is ensuring it transfers to new conditions. In parking lots, environmental factors such as weather, lighting, and camera viewing angles can significantly affect detection quality. The YOLOv8 design, which includes robust data augmentation during training, helps improve its generalization ability.

1. Lighting conditions: as part of the evaluation, I tested the system in a variety of lighting scenarios, including bright sunlight, overcast weather, and nighttime with artificial light. The model maintained high detection accuracy during daylight hours but faced some difficulties in poorly lit conditions. Fine-tuning the model on additional nighttime data improved its performance in low light.

2. Different camera angles: parking lots often use cameras in various positions, from overhead to oblique. During testing I found that YOLOv8 handles different angles well, especially overhead views where the parking spaces are clearly visible. For harder cases, I experimented with fine-tuning the model on data captured from several viewpoints, which helped improve detection quality.

Having studied neural networks and object recognition algorithms, I focused my project on using YOLO to detect parking occupancy in real time. After carefully analyzing various algorithms, YOLO stood out for its combination of speed and accuracy. I built a parking meter system around YOLO, writing the code in Python in the Visual Studio Code environment, and connected a camera for real-time detection.

The system displays all parking spaces in real time, showing which are free and which are occupied. This lets security personnel quickly direct drivers to empty spaces. The main goal is to streamline the parking process, save customers' time, and spare them the search for free spots when none are available. Figure 1 below shows the software design stage.

Figure 1. Initial design stage

Let us review the technical details. The project involves installing a camera to capture real-time video of the parking lot, which is then processed by the YOLO algorithm. Using convolutional neural networks (CNNs), the system detects objects (cars) within designated parking spaces [12].

YOLO's single-pass detection provides fast processing, making it ideal for real-time applications. The system determines whether parking spaces are occupied and presents the information visually to the security service. One of the project's main tasks was adapting the model to the specific job of detecting parked cars, since YOLO is primarily designed for general object detection. To solve this, I tuned the model's parameters and training data with particular attention to parking lots. This required collecting diverse datasets of parking images under different lighting conditions and camera angles to improve the model's robustness.

If we want to estimate the percentage of occupied parking spaces, the formula is as follows:

P0 = (n0 / nt) × 100%    (1)

where P0 is the percentage of occupied parking spaces, n0 is the number of occupied spaces, and nt is the total number of parking spaces.
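Formula (1) translates directly into code; a minimal sketch:

```python
def occupancy_percent(n_occupied, n_total):
    """P0 = (n0 / nt) * 100, per formula (1)."""
    if n_total == 0:
        raise ValueError("parking lot has no spaces")
    return n_occupied / n_total * 100.0

print(occupancy_percent(45, 60))  # 45 of 60 spaces taken -> 75.0
```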

To ensure proper coverage of the entire parking area, the camera's field of view (FOV) can be checked with the following formula:

FOV = 2 × arctan(d / (2f))    (2)

where FOV is the field-of-view angle, d is the width of the visible area, and f is the camera's focal length.
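Formula (2) in code; the 36 mm width and 18 mm focal length below are illustrative values, not measurements from the project (d and f must simply be in the same units).

```python
import math

def field_of_view_deg(d, f):
    """FOV = 2 * arctan(d / (2 f)), per formula (2); returns degrees."""
    return math.degrees(2 * math.atan(d / (2 * f)))

# A 36 mm-wide view behind an 18 mm focal length gives a 90-degree FOV
fov = field_of_view_deg(36.0, 18.0)
print(fov)  # 90.0
```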

Currently, the system successfully tracks all parking spaces in real time, updating each space's status as cars arrive and leave. For example, if a driver enters the lot and takes a space, the system immediately updates the display to mark that space as occupied. Security officers no longer have to monitor the lot manually; they can simply glance at the screen to tell drivers where the vacancies are. This not only saves time but also raises the overall quality of customer service.

Sample scenario: a customer approaches the guard to ask whether any spaces are free. Instead of searching the lot, the guard checks the system and can immediately direct the customer to a free space. If there are no vacancies, the customer is told right away, avoiding a pointless search. As Figure 2 below shows, empty and occupied spaces are highlighted in green and red, respectively: when a car enters a space within the frame, that space turns red and is marked occupied.

Figure 2. Highlighting spaces.

Looking ahead, my main focus is increasing the system's scalability. It currently works well in small parking lots, but I plan to extend it to large multi-level garages. To that end, I will consider additional features such as cloud storage and the use of multiple cameras across large areas. Integrating the system with mobile applications would also let customers check for free parking spaces before arriving. Another potential advance is predictive analytics to forecast parking availability from historical data and current trends. This project combines advanced object recognition algorithms with a practical, real-world application, offering a valuable solution for modern parking management systems. By continuing to refine the algorithm and explore new features, the project can meaningfully contribute to smart-city infrastructure and improve convenience in urban environments.

We can safely say that YOLO and neural networks have enormous potential ahead. YOLO's real-time capabilities will continue to evolve as efficiency, accuracy, and adaptability to complex environments improve. Integrating neural networks with edge computing, 5G, and IoT will further expand YOLO's uses in smart cities, autonomous vehicles, and robotics.

Future neural networks may draw on self-learning, reinforcement learning, and neuromorphic computing, expanding the boundaries of intelligence, scalability, and real-time interaction in both virtual and physical worlds. YOLO's adaptation to new architectures and its application to dynamic tasks such as 3D object detection, predictive modeling, and even cross-modal recognition (combining vision with other senses such as sound or touch) will play a key role in shaping future advances. In addition, combining transformers with CNNs may lead to hybrid models that unite the strengths of YOLO and transformer architectures, yielding improved performance. Researchers are focused on optimizing models for deployment on edge devices while maintaining high accuracy, making object detection ubiquitous across fields from healthcare to security. The future of YOLO and neural networks is thus inextricably linked to innovations in hardware, network architecture, and interdisciplinary learning, opening a new era of AI-driven automation, intelligence, and adaptation.

In conclusion, the parking meter project built on the YOLO object detection model demonstrates a strong synergy between neural networks and real-time computer vision in a practical application. The project not only delivered real-time parking spot detection, but also highlighted the importance of model optimization, scalability, and system integration for real-world deployment. YOLO's speed, accuracy, and versatility make it an ideal solution for parking management systems, and its widespread use in areas such as smart cities, security, and autonomous vehicles underlines its importance in computer vision. As we move toward an increasingly interconnected and automated future, the role of neural networks and object detection models such as YOLO will only grow. The results achieved in this project pave the way for future innovations in which AI-driven systems will play a crucial role in solving complex problems and improving our daily lives.

BIBLIOGRAPHY

1. Neural Network [Electronic resource] // Wikipedia. URL: https://ru.wikipedia.org/wiki/Neural_Network (accessed: 01.09.2024).

2. What is Neural Network? [Electronic resource] // Amazon Web Services. URL: https://aws.amazon.com/ru/what-is/neural-network/ (accessed: 05.09.2024).

3. Girshick, R. (2015). Fast R-CNN. Proceedings of the IEEE International Conference on Computer Vision, 1440-1448.

4. How to use neural networks [Electronic resource] // Habr. URL: https://habr.com/ru/articles/84015/ (accessed: 09.09.2024).

5. LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521(7553), 436-444.

6. YOLO Object Detection Explained [Electronic resource] // DataCamp. URL: https://www.datacamp.com/blog/yolo-object-detection-explained (accessed: 11.09.2024).

7. YOLO Explained [Electronic resource] // Medium. URL: https://medium.com/analytics-vidhya/yolo-explained-5b6f4564f31 (accessed: 12.09.2024).

8. Redmon, J., Divvala, S., Girshick, R., & Farhadi, A. (2016). You Only Look Once: Unified, Real-Time Object Detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 779-788.

9. The Evolution of the YOLO Neural Networks Family [Electronic resource] // Medium. URL: https://medium.com/deelvin-machine-learning/the-evolution-of-the-yolo-neural-networks-family-from-v1-to-v7-48dd98702a3d (accessed: 12.09.2024).

10. A Comparative Study of Various Object Detection Algorithms and Performance Analysis [Electronic resource] // ResearchGate. URL: https://www.researchgate.net/publication/346346964_A_Comparative_Study_of_Various_Object_Detection_Algorithms_and_Performance_Analysis (accessed: 13.09.2024).

11. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., & Bengio, Y. (2014). Generative Adversarial Nets. Advances in Neural Information Processing Systems, 2672-2680.

12. He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep Residual Learning for Image Recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770-778.
