Understanding user intention in image retrieval using multiple concept hierarchies

Abdelmadjid Youcefa, Mohammed Lamine Kherfi, Belal Khaldi, Oussama Aiadi

Abstract


Image retrieval is the technique that helps users find and retrieve desired images from a large image dataset. The user first formulates a query that expresses his/her needs. This query may take a textual form, as in semantic retrieval; a visual form, as in query by visual example; or a combination of the two, known as query by semantic example. This paper focuses on techniques for analysing queries composed of multiple semantic examples, which is a very challenging task. To solve this problem, we introduce a model based on Bayesian generalization. In cognitive science, Bayesian generalization, which underlies most related work in the literature, is a method that seeks, within a single concept hierarchy, the parent concept of a given set of concepts. Instead of using a single concept hierarchy, we generalize this method so it can operate over multiple hierarchies, where each hierarchy has a different semantic context and contains several abstraction levels. Experimental evaluations demonstrate that our method, which uses multiple hierarchies, yields better results than methods that use only a single hierarchy.
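The Bayesian generalization the abstract describes can be illustrated with a minimal sketch over a single concept hierarchy, in the spirit of the size principle from the cognitive-science literature: every ancestor node that covers all example concepts is a hypothesis, and hypotheses with smaller extensions receive higher likelihood. The toy hierarchy and all names below are hypothetical, not taken from the paper.

```python
# Hypothetical toy hierarchy: parent -> children.
HIERARCHY = {
    "animal": ["mammal", "bird"],
    "mammal": ["dog", "cat"],
    "bird": ["sparrow", "eagle"],
}

def leaves(node):
    """Return the set of leaf concepts under a node."""
    children = HIERARCHY.get(node)
    if not children:
        return {node}
    return set().union(*(leaves(c) for c in children))

def covering_nodes(leaf):
    """Return every node whose extension contains the leaf (incl. itself)."""
    return {n for n in list(HIERARCHY) + [leaf] if leaf in leaves(n)}

def generalize(examples):
    """Posterior over candidate parent concepts for the example set."""
    # A hypothesis must cover every example concept.
    hyps = set.intersection(*(covering_nodes(e) for e in examples))
    # Size principle: likelihood of n examples under h is (1/|h|)^n,
    # so smaller (more specific) hypotheses are preferred.
    scores = {h: (1.0 / len(leaves(h))) ** len(examples) for h in hyps}
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

posterior = generalize(["dog", "cat"])
# "mammal" (2 leaves) outweighs "animal" (4 leaves) under the size principle.
```

Extending this sketch to the multiple-hierarchy setting of the paper would amount to running the same inference in each hierarchy and combining the posteriors, but the exact combination rule is specific to the authors' model.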

 


Keywords


Image retrieval; Machine learning; Image understanding





DOI: http://dx.doi.org/10.12928/telkomnika.v17i5.10202



Copyright (c) 2019 Universitas Ahmad Dahlan

TELKOMNIKA Telecommunication, Computing, Electronics and Control
ISSN: 1693-6930, e-ISSN: 2302-9293

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
