Hierarchical Co-salient Object Detection via Color Names
Jing Lou 1, Fenglei Xu 1, Qingyuan Xia 1, Wankou Yang 2, Mingwu Ren 1
1 Nanjing University of Science and Technology        2 Southeast University

Figure 1:  Pipeline of the proposed model. SM and Co-SM are abbreviations for saliency map and co-saliency map, respectively.
Abstract

In this paper, a bottom-up, data-driven model is introduced to detect co-salient objects in an image pair. Inspired by biologically plausible across-scale architectures, we propose a multi-layer fusion algorithm to extract conspicuous parts from an input image. At each layer, two existing saliency models are first combined to obtain an initial saliency map, which simultaneously encodes the color name based surroundedness cue and the background measure based boundary connectivity. A global color cue defined over color names is then invoked to refine and fuse the single-layer saliency results. Finally, we exploit a color name based distance metric to measure the color consistency between the pair of saliency maps and remove non-co-salient regions. The proposed model generates both saliency and co-saliency maps. Experimental results show that it performs favorably against 14 saliency models and 6 co-saliency models on the Image Pair data set.
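To make the pipeline in Figure 1 concrete, the following is a minimal Python/NumPy sketch of the fusion steps described above. The function names, the pixel-wise product and averaging fusion rules, the 0.5 histogram threshold, and the per-pixel reweighting are illustrative assumptions rather than the paper's exact formulation; the two per-layer inputs stand in for the color name based surroundedness map [17] and the boundary connectivity based background map [26].

```python
import numpy as np

def fuse_layer(sm_cn, sm_bc):
    # Combine the two single-layer cues; the pixel-wise product is an
    # illustrative choice, not necessarily the paper's exact rule.
    fused = sm_cn * sm_bc
    return fused / (fused.max() + 1e-12)

def hierarchical_fusion(layer_maps):
    # Fuse saliency maps across layers (scales) by averaging.
    # layer_maps: list of H x W arrays, one per layer.
    fused = np.mean(layer_maps, axis=0)
    return fused / (fused.max() + 1e-12)

def color_name_histogram(cn_probs, saliency, thresh=0.5):
    # Histogram over the 11 color names [18] inside the salient region.
    # cn_probs: H x W x 11 per-pixel color-name probabilities.
    mask = saliency >= thresh * saliency.max()
    hist = cn_probs[mask].sum(axis=0)
    return hist / (hist.sum() + 1e-12)

def co_weight(sal_a, cn_a, hist_b):
    # Down-weight pixels of image A whose color names are rare in the
    # salient region of image B (a per-pixel reweighting sketch of the
    # color consistency step).
    weights = cn_a @ hist_b            # H x W expected-overlap map
    weights = weights / (weights.max() + 1e-12)
    return sal_a * weights
```

Each image would be processed independently through fuse_layer and hierarchical_fusion to obtain its saliency map; co_weight is then applied symmetrically in both directions to produce the pair of co-saliency maps.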

Paper
Results

Figure 6:   Performance of the proposed model compared with 14 saliency models (top) and 6 co-saliency models (bottom) on the Image Pair data set. (a) Precision (y-axis) and recall (x-axis) curves. (b) F-measure (y-axis) curves, where the x-axis denotes the fixed threshold T_f ∈ [0, 255]. (c) Precision-recall bars, sorted in ascending order of the F_β values obtained by adaptive thresholding.
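For reference, the curves in (b) and the adaptive-threshold F_β values in (c) can be reproduced from a saliency map and its ground truth mask along these lines; a minimal sketch assuming β² = 0.3, the weighting commonly used in salient object detection benchmarks [9], and the widely used twice-the-mean adaptive threshold, which may differ from the paper's exact setting.

```python
import numpy as np

def pr_f_curves(sal, gt, beta2=0.3):
    # Precision, recall, and F-measure at each fixed threshold
    # T_f in [0, 255]; sal is a saliency map scaled to [0, 255],
    # gt a binary ground-truth mask.
    gt = gt.astype(bool)
    P, R, F = [], [], []
    for t in range(256):
        pred = sal >= t
        tp = np.logical_and(pred, gt).sum()
        p = tp / (pred.sum() + 1e-12)
        r = tp / (gt.sum() + 1e-12)
        P.append(p)
        R.append(r)
        F.append((1 + beta2) * p * r / (beta2 * p + r + 1e-12))
    return P, R, F

def adaptive_threshold(sal):
    # Twice the mean saliency value, a common adaptive threshold in
    # the salient object detection literature (an assumption here).
    return min(2.0 * sal.mean(), 255.0)
```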

Figure 7:  Visual comparison of co-saliency detection results. (a)-(b) Input images and ground truth masks [13]. Co-saliency maps produced using (c) the proposed model, (d) CoIRS [11], (e) CBCS [12], (f) IPCS [13], (g) CSHS [14], (h) SACS [15], and (i) IPTDIM [16], respectively.
Acknowledgments

The authors would like to thank Huan Wang, Andong Wang, Haiyang Zhang, and Wei Zhu for helpful discussions. They also thank Zun Li for providing some evaluation data. This work was supported by the National Natural Science Foundation of China (Nos. 61231014, 61403202, 61703209) and the China Postdoctoral Science Foundation (No. 2014M561654).

References
[3]
Q. Yan, L. Xu, J. Shi, and J. Jia, “Hierarchical saliency detection,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2013, pp. 1155–1162.
[7]
J. Zhang and S. Sclaroff, “Saliency detection: A boolean map approach,” in Proc. IEEE Int. Conf. Comput. Vis., 2013, pp. 153–160.
[9]
A. Borji, M.-M. Cheng, H. Jiang, and J. Li, “Salient object detection: A benchmark,” IEEE Trans. Image Process., vol. 24, no. 12, pp. 5706–5722, 2015.
[10]
D. Zhang, H. Fu, J. Han, and F. Wu, “A review of co-saliency detection technique: Fundamentals, applications, and challenges,” arXiv preprint, pp. 1–18, 2017. https://arxiv.org/abs/1604.07090
[13]
H. Li and K. N. Ngan, “A co-saliency model of image pairs,” IEEE Trans. Image Process., vol. 20, no. 12, pp. 3365–3375, 2011.
[17]
J. Lou, H. Wang, L. Chen, Q. Xia, W. Zhu, and M. Ren, “Exploiting color name space for salient object detection,” arXiv preprint, pp. 1–13, 2017. https://arxiv.org/abs/1703.08912
[18]
J. van de Weijer, C. Schmid, and J. Verbeek, “Learning color names from real-world images,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2007, pp. 1–8.
[19]
M.-M. Cheng, G.-X. Zhang, N. J. Mitra, X. Huang, and S.-M. Hu, “Global contrast based salient region detection,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2011, pp. 409–416.
[26]
W. Zhu, S. Liang, Y. Wei, and J. Sun, “Saliency optimization from robust background detection,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2014, pp. 2814–2821.
Latest update:  Jun 19, 2018