Abstract
This paper presents a novel saliency detection method based on weighted linear multiple kernel learning (WLMKL), which adaptively combines different contrast measurements in a supervised manner. Three commonly used bottom-up visual saliency operations are first introduced: corner-surround contrast (CSC), center-surround contrast (CESC), and global contrast (GC). These contrast measures are then fed into our WLMKL framework to produce the final saliency map. We show that the weights assigned to each contrast feature map are always normalized in our WLMKL formulation. In addition, the proposed approach benefits from the contribution of each individual contrast operation, and thus produces more robust and accurate saliency maps. Extensive experimental results show the effectiveness of the proposed model and demonstrate that the combination is superior to any individual subcomponent.
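In the WLMKL formulation described above, the learned weights are normalized so that each contrast map contributes a non-negative, bounded share of the final saliency. The sketch below illustrates only that combination step, not the paper's supervised kernel-learning procedure; the function name, the softmax-style normalization, and the toy maps are all assumptions introduced for illustration.

```python
import numpy as np

def combine_saliency(maps, raw_weights):
    """Fuse contrast maps with normalized weights (non-negative, summing to 1)."""
    w = np.asarray(raw_weights, dtype=float)
    w = np.exp(w - w.max())   # shift for numerical stability before exponentiating
    w /= w.sum()              # normalize onto the simplex
    return sum(wi * m for wi, m in zip(w, maps))

# Toy 2x2 stand-ins for the three contrast maps (CSC, CESC, GC)
csc = np.array([[1.0, 0.0], [0.0, 1.0]])
cesc = np.array([[0.0, 1.0], [1.0, 0.0]])
gc = np.ones((2, 2))

# Equal raw weights give each map a 1/3 share of the fused saliency
s = combine_saliency([csc, cesc, gc], [0.0, 0.0, 0.0])
```

With equal raw weights each map receives weight 1/3, so every pixel of `s` here equals 2/3; in the actual method the weights would instead be learned from training data.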
Acknowledgements
This work was partly supported by the National Science Foundation (Grant No. IIS-1302164), the National Natural Science Foundation of China (Grant Nos. 61881240048, 61571240, 61501247, 61501259, 61671253), the China Postdoctoral Science Foundation (Grant No. 2015M581841), the Open Fund Project of the Key Laboratory of Intelligent Perception and Systems for High-Dimensional Information of the Ministry of Education (Nanjing University of Science and Technology) (Grant Nos. JYB201709, JYB201710), the Natural Science Foundation of Jiangsu Province, China (Grant No. BK20160908), and NUPTSF (Grant No. NY214139).
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Zhou, Q. et al. (2020). Weighted Linear Multiple Kernel Learning for Saliency Detection. In: Lu, H., & Li, Y. (Eds.), 2nd EAI International Conference on Robotic Sensor Networks. EAI/Springer Innovations in Communication and Computing. Springer, Cham. https://doi.org/10.1007/978-3-030-17763-8_19
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-17762-1
Online ISBN: 978-3-030-17763-8
eBook Packages: Engineering (R0)