Exploiting Color Name Space for Salient Object Detection
(SUBMITTED)
Jing Lou,   Huan Wang,   Longtao Chen,   Qingyuan Xia,   Wei Zhu,   Mingwu Ren
Nanjing University of Science and Technology

Figure 2.  Framework of the proposed CNS model.
Abstract

In this paper, we investigate the contribution of color names to salient object detection. Each input image is first converted to the color name space, which consists of 11 probabilistic channels. By exploring the topological structure relationship between figure and ground, we obtain a saliency map through a linear combination of a set of sequential attention maps. To overcome the limitation of exploiting only the surroundedness cue, two global cues with respect to color names are invoked to guide the computation of another weighted saliency map. Finally, we integrate the two saliency maps into a unified framework to infer the final saliency results. In addition, an improved post-processing procedure is introduced to effectively suppress backgrounds while uniformly highlighting salient objects. Experimental results show that the proposed model produces more accurate saliency maps and performs favorably against 23 saliency models in terms of three evaluation metrics on three public datasets.
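As a concrete illustration of the first step, the sketch below converts an RGB image into the 11-channel color name space. It assumes the 32768-by-11 lookup table (w2c.mat) from van de Weijer et al.'s color naming work, which maps quantized RGB triplets to probabilities over the 11 basic color names; the file names are placeholders, and this is a minimal sketch rather than the released implementation.

    % Minimal sketch of the color name conversion (not the released code).
    % Assumes w2c.mat provides w2c: a 32768-by-11 lookup table mapping
    % quantized RGB triplets to probabilities over the 11 color names.
    load('w2c.mat');                       % provides the matrix w2c
    img = double(imread('input.jpg'));     % hypothetical input image
    [h, w, ~] = size(img);
    R = img(:,:,1);  G = img(:,:,2);  B = img(:,:,3);
    % Quantize each channel to 32 levels and build linear table indices.
    idx = 1 + floor(R(:)/8) + 32*floor(G(:)/8) + 32*32*floor(B(:)/8);
    CN = reshape(w2c(idx,:), [h, w, 11]);  % 11 probabilistic channels

At each pixel, the 11 values of CN form a probability distribution over the color names, so the channels can be thresholded into the sequential attention maps described above.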

Manuscript
  •   Jing Lou, Huan Wang, Longtao Chen, Qingyuan Xia, Wei Zhu, Mingwu Ren*, “Exploiting Color Name Space for Salient Object Detection,” arXiv:1703.08912 [cs.CV], pp. 1–13, 2017.  [PDF] [Bib]
  • The MATLAB code and all saliency maps will be made publicly available after our manuscript is accepted.

Results

Figure 8.  Visual comparison of salient object detection results. The top three rows, middle two rows, and bottom three rows are images from the ASD, ECSSD, and ImgSal datasets, respectively. (a) Input images, and (b) their ground truth masks. Saliency maps produced by (c) the proposed CNS model, and (d)–(m) ten other saliency models.
Acknowledgments

The work of J. Lou, L. Chen, W. Zhu, and M. Ren was supported by the National Natural Science Foundation of China under Grant 61231014. H. Wang was supported in part by the National Defense Pre-research Foundation of China under Grant 9140A01060115BQ02002. Q. Xia was supported by the National Natural Science Foundation of China under Grant 61403202 and by the China Postdoctoral Science Foundation under Grant 2014M561654. The authors thank Andong Wang, Fenglei Xu, and Haiyang Zhang for helpful discussions regarding this manuscript.

Latest update:  Mar 28, 2017