Optimization of Vehicle Detection at Intersections Using the YOLOv5 Model

Authors

  • I Wayan Adi Artha Wiguna, Master's Program, Department of Master of Information Systems, Institut Teknologi dan Bisnis STIKOM Bali, Indonesia
  • Roy Rudolf Huizen, Department of Master of Information Systems, Institut Teknologi dan Bisnis STIKOM Bali, Indonesia
  • Gede Angga Pradipta, Department of Master of Information Systems, Institut Teknologi dan Bisnis STIKOM Bali, Indonesia

DOI:

https://doi.org/10.26555/jiteki.v10i4.29309

Keywords:

Traffic Jam, Object Detection, YOLOv5, Model Optimization

Abstract

This study analyzes and evaluates the performance of the YOLOv5 model in detecting vehicles at intersections in order to optimize traffic flow. The method involves training the YOLOv5 model on traffic datasets collected from several intersections and tuning its hyperparameters to achieve the best detection accuracy. The results show that the optimized YOLOv5 model detects multiple road-user classes with high accuracy: 85.47% for trucks, 87.12% for pedestrians, 86.54% for buses, 77.20% for cars, 80.48% for motorcycles, and 78.80% for bicycles. Compared with the default model, the optimized model achieves a significant improvement in detection performance. The study concludes that optimizing the YOLOv5 model is effective for improving vehicle detection accuracy at intersections. Implementing the optimized model can contribute significantly to traffic management, reduce congestion, and improve road safety, and the approach is expected to be applied more widely for efficient traffic management in major cities.
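
For readers who want to try the detection step themselves, the sketch below shows a minimal way to run a pretrained YOLOv5 model from the Ultralytics YOLOv5 repository on a single intersection frame. It uses the stock yolov5s weights rather than the authors' fine-tuned model, and the image path, class filter, and confidence threshold are illustrative assumptions, not the settings reported in the study.

    import torch

    # Load a pretrained YOLOv5s checkpoint via torch.hub from the Ultralytics
    # YOLOv5 repository (downloads weights on first run).
    model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)

    # Keep only the road-user classes evaluated in the paper
    # (COCO indices: 0=person, 1=bicycle, 2=car, 3=motorcycle, 5=bus, 7=truck).
    model.classes = [0, 1, 2, 3, 5, 7]
    model.conf = 0.4  # illustrative confidence threshold, not the paper's setting

    # Run inference on a single intersection frame (file name is hypothetical).
    results = model('intersection_frame.jpg')
    results.print()  # per-image summary of detected classes and counts
    results.save()   # annotated frame saved under runs/detect/exp*

Fine-tuning on an intersection dataset and the hyperparameter optimization summarized in the abstract would typically be carried out with the repository's train.py script (for example by supplying a custom --hyp YAML or using its built-in --evolve search); the exact configuration used by the authors is described in the full paper.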

Published

2025-01-15

How to Cite

[1] I. W. A. A. Wiguna, R. R. Huizen, and G. A. Pradipta, “Optimization of Vehicle Detection at Intersections Using the YOLOv5 Model,” J. Ilm. Tek. Elektro Komput. dan Inform., vol. 10, no. 4, pp. 885–896, Jan. 2025.

Section

Articles
