Exploiting Color Name Space for Salient Object Detection

Jing Lou 1, Huan Wang 2, Longtao Chen 2, Fenglei Xu 2, Qingyuan Xia 2, Wei Zhu 2, Mingwu Ren 2
1 Changzhou Vocational Institute of Mechatronic Technology    2 Nanjing University of Science and Technology

Figure 2  Framework of the proposed CNS model.
Abstract

In this paper, we investigate the contribution of color names to the task of salient object detection. An input image is first converted to the color name space, which consists of 11 probabilistic channels. By exploiting a surroundedness cue, we obtain a saliency map through a linear combination of a set of sequential attention maps. To overcome the limitation of relying on the surroundedness cue alone, two global cues with respect to color names are invoked to guide the computation of a weighted saliency map. Finally, we integrate the two saliency maps into a unified framework to generate the final result. In addition, an improved post-processing procedure is introduced to effectively suppress image backgrounds while uniformly highlighting salient objects. Experimental results show that the proposed model produces more accurate saliency maps and performs well against twenty-one saliency models in terms of three evaluation metrics on three public data sets.
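To make the first two steps concrete, the sketch below (illustrative Python, not the authors' released code) shows how an RGB image can be mapped into the 11 probabilistic color name channels using the learned lookup table of [46], and how a surroundedness attention map can be accumulated from Boolean maps of one channel in the spirit of BMS [52]. The w2c table layout (a 32×32×32-bin RGB cube) and the threshold set are assumptions.

```python
import numpy as np
from scipy import ndimage

# The 11 basic color terms of [46], in the channel order used by the
# publicly released w2c lookup table (assumed).
COLOR_NAMES = ["black", "blue", "brown", "grey", "green", "orange",
               "pink", "purple", "red", "white", "yellow"]

def rgb_to_color_names(image, w2c):
    """Convert an RGB image (H x W x 3, uint8) into 11 probabilistic
    color name channels (H x W x 11).

    `w2c` is assumed to be the (32768, 11) lookup table of [46],
    indexed over an 8-bit RGB cube quantized to 32 bins per channel.
    """
    r = image[..., 0].astype(np.int64) // 8
    g = image[..., 1].astype(np.int64) // 8
    b = image[..., 2].astype(np.int64) // 8
    idx = r + 32 * g + 32 * 32 * b   # flatten the quantized triple
    return w2c[idx]                  # per-pixel probabilities, sum ~ 1

def surroundedness_attention(channel, thresholds):
    """Accumulate attention over Boolean maps of one probabilistic
    channel: connected regions that do not touch the image border
    are 'surrounded' and get activated."""
    attention = np.zeros(channel.shape, dtype=np.float64)
    for t in thresholds:
        boolean_map = channel > t
        for bmap in (boolean_map, ~boolean_map):
            labels, _ = ndimage.label(bmap)
            border_labels = np.unique(np.concatenate(
                [labels[0], labels[-1], labels[:, 0], labels[:, -1]]))
            attention += (~np.isin(labels, border_labels)) & bmap
    peak = attention.max()
    return attention / peak if peak > 0 else attention
```

In this sketch, a threshold set such as np.arange(0.1, 1.0, 0.1) would be applied to each of the 11 channels, and the resulting attention maps linearly combined into the surroundedness saliency map that is later fused with the globally weighted one.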

Paper
Results

Figure 9  Performance of the proposed model compared with twenty-one saliency models on ASD [24,2], ECSSD [48,40], and ImgSal [22,23], respectively. (a) Precision-Recall curves. (b) F_β-measure curves. (c) Precision-Recall bars.

Figure 10  Visual comparison of salient object detection results. Top three rows, middle three rows, and bottom three rows are from ASD [24,2], ECSSD [48,40], and ImgSal [22,23], respectively. (a) Input images, and (b) ground truth masks. Saliency maps produced by using (c) the proposed CNS model, (d) RPC [25], (e) BMS [52], (f) FES [42], (g) GC [8], (h) HFT [23], (i) PCA [28], (j) RC [9], and (k) TLLT [13].
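For reference, a minimal sketch of how the Precision-Recall and F_β values underlying Figure 9 are conventionally computed in salient object detection benchmarks (e.g. [6]); this is the standard formulation, not code from the paper, and the array names are illustrative.

```python
import numpy as np

def pr_fbeta(saliency, gt, beta2=0.3):
    """Precision, recall, and F_beta for one binarized saliency map.

    `saliency` and `gt` are boolean (H, W) masks. beta2 = 0.3 is the
    weight conventionally used in salient object detection benchmarks,
    emphasizing precision over recall.
    """
    tp = np.logical_and(saliency, gt).sum()
    precision = tp / max(saliency.sum(), 1)
    recall = tp / max(gt.sum(), 1)
    denom = beta2 * precision + recall
    fbeta = (1 + beta2) * precision * recall / denom if denom > 0 else 0.0
    return precision, recall, fbeta

# A Precision-Recall curve sweeps a fixed threshold over [0, 255]:
# curve = [pr_fbeta(sal_map >= t, gt)[:2] for t in range(256)]
```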
Acknowledgments

J. Lou is supported by the Changzhou Key Laboratory of Industrial Internet and Data Intelligence (No. CM20183002) and the QingLan Project of Jiangsu Province (2018). The work of L. Chen, F. Xu, W. Zhu, and M. Ren is supported by the National Natural Science Foundation of China (Nos. 61231014 and 61727802). H. Wang is supported by the National Defense Pre-research Foundation of China (No. 9140A01060115BQ02002) and the National Natural Science Foundation of China (No. 61703209). Q. Xia is supported by the National Natural Science Foundation of China (No. 61403202) and the China Postdoctoral Science Foundation (No. 2014M561654). The authors thank Andong Wang and Haiyang Zhang for helpful discussions regarding this manuscript.

References
6. Borji A, Cheng MM, Jiang H, Li J (2015) Salient object detection: a benchmark. IEEE Trans Image Process 24(12):5706-5722
7. Cheng MM, Zhang G, Mitra NJ, Huang X, Hu SM (2011) Global contrast based salient region detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 409-416
46. van de Weijer J, Schmid C, Verbeek J (2007) Learning color names from real-world images. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1-8
52. Zhang J, Sclaroff S (2013) Saliency detection: a Boolean map approach. In: Proceedings of the IEEE international conference on computer vision, pp 153-160
Latest update: Sep 26, 2020