
Proposal of an image generation model using cGANs for sketching faces

Nguyen Phat Huu, Nguyet Giap Thi

Abstract


Transforming sketches into realistic images of human faces has an important application in criminal investigation, where suspects must be identified from the portraits described by witnesses. However, because a sketch and a photograph of a real face differ in both detail and color, converting a hand-drawn sketch into an actual face is challenging and time-consuming. To solve this problem, we propose an image generation model that combines a conditional generative adversarial network with an autoencoder (cGAN-AE) to generate synthetic samples for variable-length, multi-feature sequence datasets. The model learns to encode the dataset into a vector of reduced dimension; from this compressed representation, the autoencoder must reconstruct an image close to the original. In other words, the autoencoder aims to reproduce its input at its output while retaining only the essential features. Passing raw sketches through the cGAN then produces realistic images, making sketch-to-photo conversion fast and easy. The results show that the model achieves an accuracy of up to 75% and a PSNR of 25.5 dB with only 606 face images, making it potentially applicable in practice. We compare the performance of the proposed architecture with other solutions; our proposal obtains competitive performance in terms of output quality (25.5 dB) and efficiency (above 75%).
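To make the pipeline concrete, the following is a minimal PyTorch sketch of the cGAN-AE idea described above. The 64x64 resolution, layer sizes, latent dimension, and L1 weight are illustrative assumptions of ours, not the exact architecture of the paper: an encoder compresses the sketch into a low-dimensional vector, a decoder (acting as the generator) reconstructs a face image from that vector, and a discriminator conditioned on the sketch provides the adversarial signal.

# Minimal cGAN-AE sketch (illustrative; sizes and losses are assumptions,
# not the authors' exact architecture).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compress a 3x64x64 sketch into a low-dimensional latent vector."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),    # -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # -> 16x16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2), # -> 8x8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Generator: reconstruct a 3x64x64 face image from the latent vector."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x16
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),     # -> 64x64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

class Discriminator(nn.Module):
    """Score a (sketch, photo) pair; conditioning via channel concatenation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 1),
        )

    def forward(self, sketch, photo):
        return self.net(torch.cat([sketch, photo], dim=1))

# One illustrative generator update: an adversarial term plus an L1
# reconstruction term that plays the autoencoder role.
enc, dec, disc = Encoder(), Decoder(), Discriminator()
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
sketch = torch.randn(4, 3, 64, 64)  # stand-in batch of sketches
photo = torch.randn(4, 3, 64, 64)   # paired ground-truth photos
fake = dec(enc(sketch))
g_loss = bce(disc(sketch, fake), torch.ones(4, 1)) + 100.0 * l1(fake, photo)
g_loss.backward()

For reference, the reported quality metric is the standard peak signal-to-noise ratio, PSNR = 10 * log10(MAX^2 / MSE); for 8-bit images (MAX = 255), the reported 25.5 dB corresponds to a mean squared error of roughly 183.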

Keywords


GANs; cGANs; CNN; Sketching faces; Image processing


DOI: http://dx.doi.org/10.26555/jifo.v15i2.a20576



Copyright (c) 2021 Nguyen Phat Huu, Nguyet Giap Thi

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

____________________________________
JURNAL INFORMATIKA

ISSN: 1978-0524 (print) | 2528-6374 (online)
