Facial recognition using deep learning
Keywords:
Classification, Facial Recognition, Convolutional Neural Network

Abstract
In this article, the researcher presents the results of recognizing four emotional states (happy, sad, angry, and disgust) from facial expressions. A deep learning approach based on a Convolutional Neural Network has proven to be an effective way to address this recognition problem. A comparative study is carried out on the MUAD3D dataset from the Faculty of Computer Science and Information Technology, Universiti Malaysia Sarawak, to evaluate the recognition accuracy achieved on this dataset. Further discussion is provided to demonstrate the effectiveness of the Convolutional Neural Network for recognition problems.
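As a rough illustration of the setup described in the abstract, the following Python sketch builds a four-class CNN classifier with Keras/TensorFlow. The input shape (48x48 grayscale), layer sizes, and training settings are illustrative assumptions only; the paper's actual architecture and the MUAD3D preprocessing pipeline are not specified here.

    # Minimal sketch of a CNN emotion classifier (happy, sad, angry, disgust).
    # Architecture and hyperparameters are assumptions for illustration, not
    # the configuration reported in the paper.
    from tensorflow.keras import layers, models

    NUM_CLASSES = 4  # happy, sad, angry, disgust

    def build_model(input_shape=(48, 48, 1)):
        # Three convolution/pooling stages followed by a small dense head.
        model = models.Sequential([
            layers.Input(shape=input_shape),
            layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
            layers.MaxPooling2D((2, 2)),
            layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
            layers.MaxPooling2D((2, 2)),
            layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
            layers.MaxPooling2D((2, 2)),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dropout(0.5),
            layers.Dense(NUM_CLASSES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    if __name__ == "__main__":
        model = build_model()
        model.summary()
        # Training on a prepared dataset would look like:
        # model.fit(train_images, train_labels, epochs=20, validation_split=0.2)

Accuracy on a dataset such as MUAD3D would then be measured with model.evaluate on a held-out test split.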
License
Authors who publish with Jurnal Informatika (JIFO) agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution-ShareAlike License (CC BY-SA 4.0) that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgement of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.