Hierarchical Co-salient Object Detection via Color Names
Jing Lou¹,   Fenglei Xu¹,   Qingyuan Xia¹,   Wankou Yang²,   Mingwu Ren¹
¹Nanjing University of Science and Technology        ²Southeast University

Figure 1:  Pipeline of the proposed model. SM and Co-SM are abbreviations for saliency map and co-saliency map, respectively.
Abstract

In this paper, a bottom-up and data-driven model is introduced to detect co-salient objects from an image pair. Inspired by the biologically-plausible across-scale architecture, we propose a multi-layer fusion algorithm to extract conspicuous parts from an input image. At each layer, two existing saliency models are first combined to obtain an initial saliency map, which simultaneously codes for the color names based surrounded cue and the background measure based boundary connectivity. Then a global color cue with respect to color names is invoked to refine and fuse single-layer saliency results. Finally, we exploit the color names based distance metric to measure the color consistency between a pair of saliency maps and remove those non-co-salient regions. The proposed model can generate both saliency and co-saliency maps. Experimental results show that our model performs favorably against 14 saliency models and 6 co-saliency models on the Image Pair data set.
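The per-layer fusion described above can be illustrated with a toy sketch. This is not the authors' implementation: the two input maps, the multiplicative combination, and the color-cue weighting below are simplified assumptions standing in for the actual CNS/background-measure maps and the color-name-based global cue.

```python
# Illustrative sketch (NOT the paper's code): fuse two saliency maps by
# pixel-wise multiplication, normalize to [0, 1], then reweight by a
# global color cue. All inputs here are toy 2x3 "maps" (nested lists).

def normalize(m):
    """Min-max normalize a 2-D map to [0, 1]."""
    vals = [v for row in m for v in row]
    lo, hi = min(vals), max(vals)
    rng = (hi - lo) or 1.0
    return [[(v - lo) / rng for v in row] for row in m]

def fuse(sm_a, sm_b, color_cue):
    """Multiply two saliency maps, then reweight by a global color cue."""
    fused = [[a * b for a, b in zip(ra, rb)] for ra, rb in zip(sm_a, sm_b)]
    fused = normalize(fused)
    return normalize([[f * c for f, c in zip(rf, rc)]
                      for rf, rc in zip(fused, color_cue)])

sm1 = [[0.2, 0.8, 0.4], [0.1, 0.9, 0.3]]   # e.g., surroundedness-based map
sm2 = [[0.3, 0.7, 0.5], [0.2, 0.6, 0.4]]   # e.g., boundary-connectivity map
cue = [[1.0, 1.0, 0.5], [0.5, 1.0, 0.5]]   # toy color-name weighting
result = fuse(sm1, sm2, cue)
print(result[0][1])  # → 1.0 (the most salient pixel after fusion)
```

Multiplicative fusion keeps only regions that both maps agree on, which is one common way to suppress the background noise of either individual map.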

Paper
  • Jing Lou, Fenglei Xu, Qingyuan Xia, Wankou Yang*, Mingwu Ren*, “Hierarchical Co-salient Object Detection via Color Names,” in Proceedings of the Asian Conference on Pattern Recognition (ACPR), pp. 1–7, 2017.  
PDF | Bib | MATLAB Code | Slides (in Chinese)
  • The MATLAB code of the CNS model [17] will be made publicly available after our manuscript “Exploiting Color Name Space for Salient Object Detection” is accepted.
  • We provide the saliency/co-saliency maps of the proposed HCN model on the Image Pair dataset [13]. The zip files can be downloaded from GitHub (below) or Baidu Cloud. Each zip file also includes three evaluation measures: the precision-recall curve, the F-measure curve, and the precision-recall bar. See the help text in the PlotPRF script for usage. If you use any part of our saliency/co-saliency maps or evaluation measures, please cite our paper.

    Dataset:    Image Pair
    Images:     105 image pairs (i.e., 210 images)
    Results:    ImagePair_HCNs, ImagePair_HCNco
    Reference:  H. Li and K. N. Ngan, “A Co-Saliency Model of Image Pairs,” IEEE Trans. Image Process., vol. 20, no. 12, pp. 3365–3375, 2011.
Results

Figure 6:   Performance of the proposed model compared with 14 saliency models (top) and 6 co-saliency models (bottom) on the Image Pair data set. (a) Precision (y-axis) vs. recall (x-axis) curves. (b) F-measure (y-axis) curves, where the x-axis denotes the fixed threshold T_f ∈ [0, 255]. (c) Precision-recall bars, sorted in ascending order of the F_β values obtained by adaptive thresholding.
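The precision, recall, and F_β numbers behind Figure 6 follow the standard salient-object-detection protocol: binarize the saliency map at a threshold, compare the result against the ground-truth mask, and weight precision and recall with β² = 0.3, the value conventionally used in this literature [9]. A minimal sketch on a toy flattened map (the specific values are made up for illustration):

```python
# Minimal sketch of the standard evaluation: binarize a saliency map at a
# fixed threshold T_f, then compute precision, recall, and F_beta against
# the ground-truth mask. beta^2 = 0.3 is the conventional choice [9].

def prf(saliency, gt, thresh, beta_sq=0.3):
    pred = [v >= thresh for v in saliency]        # binarized saliency map
    tp = sum(p and g for p, g in zip(pred, gt))   # true positives
    precision = tp / max(sum(pred), 1)
    recall = tp / max(sum(gt), 1)
    f = ((1 + beta_sq) * precision * recall /
         max(beta_sq * precision + recall, 1e-12))
    return precision, recall, f

# Toy flattened saliency map (values in [0, 255]) and binary ground truth.
sal = [250, 200, 120, 30, 10, 220]
gt  = [1,   1,   0,   0,  0,  1]
p, r, f = prf(sal, gt, thresh=128)
print(p, r, f)  # → 1.0 1.0 1.0 (all three salient pixels recovered)
```

Sweeping the threshold over [0, 255] traces out the precision-recall and F-measure curves of panels (a) and (b); the adaptive-threshold bars of panel (c) instead pick one threshold per image (typically a multiple of the map's mean value).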

Figure 7:  Visual comparison of co-saliency detection results. (a)-(b) Input images and ground truth masks [13]. Co-saliency maps produced using (c) the proposed model, (d) CoIRS [11], (e) CBCS [12], (f) IPCS [13], (g) CSHS [14], (h) SACS [15], and (i) IPTDIM [16], respectively.
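The co-saliency step compared above hinges on measuring color consistency between the salient regions of the two images. The following toy sketch uses normalized histograms over the 11 basic color names [18] and an L2 distance; both the histogram representation and the distance are illustrative assumptions here, not the paper's exact metric.

```python
# Toy sketch of a color-consistency check between the salient regions of an
# image pair, using color-name histograms over the 11 basic color names [18].
# The L2 distance used here is an illustrative stand-in for the paper's metric.

def color_name_histogram(color_name_ids, num_names=11):
    """Normalized histogram over color-name indices of a salient region."""
    hist = [0.0] * num_names
    for c in color_name_ids:
        hist[c] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def consistency(region_a, region_b):
    """Smaller distance = more consistent colors across the image pair."""
    ha = color_name_histogram(region_a)
    hb = color_name_histogram(region_b)
    return sum((a - b) ** 2 for a, b in zip(ha, hb)) ** 0.5

# Regions given as lists of color-name indices (0..10).
same_obj = consistency([0, 0, 1, 0], [0, 1, 0, 0])   # similar palettes
diff_obj = consistency([0, 0, 0, 0], [5, 5, 6, 6])   # unrelated palettes
print(same_obj < diff_obj)  # → True
```

Regions whose color-name histograms are far apart across the pair are the candidates for removal as non-co-salient.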
Acknowledgments

The authors would like to thank Huan Wang, Andong Wang, Haiyang Zhang, and Wei Zhu for helpful discussions. They also thank Zun Li for providing some evaluation data. This work was supported by the National Natural Science Foundation of China (Nos. 61231014, 61403202, 61703209) and the China Postdoctoral Science Foundation (No. 2014M561654).

References
[3]
Q. Yan, L. Xu, J. Shi, and J. Jia, “Hierarchical saliency detection,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2013, pp. 1155–1162.
[7]
J. Zhang and S. Sclaroff, “Saliency detection: A boolean map approach,” in Proc. IEEE Int. Conf. Comput. Vis., 2013, pp. 153–160.
[9]
A. Borji, M.-M. Cheng, H. Jiang, and J. Li, “Salient object detection: A benchmark,” IEEE Trans. Image Process., vol. 24, no. 12, pp. 5706–5722, 2015.
[10]
D. Zhang, H. Fu, J. Han, and F. Wu, “A review of co-saliency detection technique: Fundamentals, applications, and challenges,” arXiv:1604.07090v3 [cs.CV], pp. 1–18, 2017.
[13]
H. Li and K. N. Ngan, “A co-saliency model of image pairs,” IEEE Trans. Image Process., vol. 20, no. 12, pp. 3365–3375, 2011.
[17]
J. Lou, H. Wang, L. Chen, Q. Xia, W. Zhu, and M. Ren, “Exploiting color name space for salient object detection,” arXiv:1703.08912 [cs.CV], pp. 1–13, 2017.
[18]
J. van de Weijer, C. Schmid, and J. Verbeek, “Learning color names from real-world images,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2007, pp. 1–8.
[19]
M.-M. Cheng, G.-X. Zhang, N. J. Mitra, X. Huang, and S.-M. Hu, “Global contrast based salient region detection,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2011, pp. 409–416.
[26]
W. Zhu, S. Liang, Y. Wei, and J. Sun, “Saliency optimization from robust background detection,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2014, pp. 2814–2821.
Latest update:  Sep 10, 2017