Comparative Study of VGG16 and MobileNetV2 for Masked Face Recognition

Authors

  • Faisal Dharma Adhinata, Institut Teknologi Telkom Purwokerto, http://orcid.org/0000-0002-2624-173X
  • Nia Annisa Ferani Tanjung, Institut Teknologi Telkom Purwokerto
  • Widi Widayat, Institut Teknologi Telkom Purwokerto
  • Gracia Rizka Pasfica, Institut Teknologi Telkom Purwokerto
  • Fadlan Raka Satura, Institut Teknologi Telkom Purwokerto

DOI:

https://doi.org/10.26555/jiteki.v7i2.20758

Keywords:

Coronavirus, Face Recognition, MobileNetV2, Transfer Learning, VGG16

Abstract

Indonesia is one of the countries affected by the coronavirus pandemic, which has claimed many lives. The pandemic forces people to wear masks every day, especially at work, to break the chain of coronavirus transmission. Before the pandemic, face recognition systems for attendance used the entire face as input, so the results were accurate. During the pandemic, however, employees wear masks even when recording attendance, which can reduce recognition accuracy. In this research, we use deep learning to recognize masked faces. We propose transfer learning with pre-trained models to perform feature extraction and classification of masked face images; transfer learning is suitable here because the amount of available data is small. We compare two transfer learning models, VGG16 and MobileNetV2, evaluating each across different batch sizes and numbers of epochs. Both models perform best with a batch size of 32 and 50 epochs. The results show that MobileNetV2 is more accurate than VGG16, achieving an accuracy of 95.42%. This study provides an overview of using transfer learning techniques for masked face recognition.
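
For readers who want a concrete picture of the setup, the sketch below shows one plausible way to build the transfer learning pipeline described in the abstract, assuming TensorFlow/Keras. The frozen ImageNet backbone, the two-layer classification head, the class count, and the names (`build_model`, `x_train`, `y_train`) are illustrative assumptions; only the two backbones (VGG16, MobileNetV2) and the best hyperparameters (batch size 32, 50 epochs) come from the abstract.

```python
import tensorflow as tf

NUM_CLASSES = 5            # assumption: number of enrolled masked-face identities
IMG_SHAPE = (224, 224, 3)  # default input size for both VGG16 and MobileNetV2

def build_model(backbone_name: str = "MobileNetV2") -> tf.keras.Model:
    """Frozen pre-trained backbone (feature extraction) plus a small trainable head."""
    if backbone_name == "MobileNetV2":
        backbone = tf.keras.applications.MobileNetV2(
            include_top=False, weights="imagenet", input_shape=IMG_SHAPE)
    else:
        backbone = tf.keras.applications.VGG16(
            include_top=False, weights="imagenet", input_shape=IMG_SHAPE)
    backbone.trainable = False  # reuse ImageNet features; train only the head

    return tf.keras.Sequential([
        backbone,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(128, activation="relu"),   # assumed head size
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_model("MobileNetV2")
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# Best hyperparameters reported in the abstract: batch size 32, 50 epochs.
# x_train / y_train are hypothetical arrays of masked-face images and one-hot
# labels with shapes (N, 224, 224, 3) and (N, NUM_CLASSES).
# model.fit(x_train, y_train, validation_split=0.2, batch_size=32, epochs=50)
```

Freezing the backbone is what makes this workable on a small dataset: only the dense head is trained, so far fewer labeled masked-face images are needed than for training either network from scratch.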

Published

2021-07-20

Section

Articles