
Weighted Linear Multiple Kernel Learning for Saliency Detection

  • Conference paper
2nd EAI International Conference on Robotic Sensor Networks

Abstract

This paper presents a novel saliency detection method based on weighted linear multiple kernel learning (WLMKL), which adaptively combines different contrast measurements in a supervised manner. Three commonly used bottom-up visual saliency operations are first introduced: corner-surround contrast (CSC), center-surround contrast (CESC), and global contrast (GC). These contrast measures are then fed into our WLMKL framework to produce the final saliency map. We show that the weights assigned to the contrast feature maps are always normalized in our WLMKL formulation. In addition, the proposed approach benefits from the complementary contributions of the individual contrast operations, and thus produces more robust and accurate saliency maps. Extensive experimental results show the effectiveness of the proposed model and demonstrate that the combination is superior to any individual subcomponent.
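At inference time, the pipeline described in the abstract reduces to a per-pixel weighted linear combination of the three contrast maps under normalized weights. The sketch below illustrates only that fusion step, assuming the weights have already been obtained from the supervised WLMKL training (which is not reproduced here); the function name, map sizes, and example weights are placeholders rather than the authors' implementation.

import numpy as np

def combine_contrast_maps(contrast_maps, weights):
    """Fuse contrast feature maps (e.g. CSC, CESC, GC) into one saliency map
    using a weighted linear combination with weights normalized to sum to 1.

    contrast_maps : sequence of 2-D arrays of identical shape
    weights       : sequence of non-negative scalars, one per map
                    (stand-ins for weights learned by the supervised
                    WLMKL training, which is not shown here)
    """
    w = np.clip(np.asarray(weights, dtype=float), 0.0, None)
    w = w / w.sum()  # enforce the normalization property stated in the abstract
    fused = sum(wi * np.asarray(m, dtype=float) for wi, m in zip(w, contrast_maps))
    # Rescale to [0, 1] purely for visualization.
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-12)

# Hypothetical usage with random stand-ins for the three contrast maps.
rng = np.random.default_rng(0)
csc_map, cesc_map, gc_map = (rng.random((240, 320)) for _ in range(3))
saliency = combine_contrast_maps([csc_map, cesc_map, gc_map], weights=[0.2, 0.5, 0.3])

The explicit renormalization inside the function mirrors the property that the combination weights always sum to one; how those weights are actually learned is the subject of the WLMKL formulation in the paper itself.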

Notes

  1. http://www.di.ens.fr/willow/SPAMS/index.html.

Acknowledgements

This work was partly supported by the National Science Foundation (Grant No. IIS-1302164), the National Natural Science Foundation of China (Grant Nos. 61881240048, 61571240, 61501247, 61501259, and 61671253), the China Postdoctoral Science Foundation (Grant No. 2015M581841), the Open Fund Project of the Key Laboratory of Intelligent Perception and Systems for High-Dimensional Information of the Ministry of Education (Nanjing University of Science and Technology) (Grant Nos. JYB201709 and JYB201710), the Natural Science Foundation of Jiangsu Province, China (Grant No. BK20160908), and NUPTSF (Grant No. NY214139).

Author information

Corresponding author

Correspondence to Quan Zhou.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Zhou, Q. et al. (2020). Weighted Linear Multiple Kernel Learning for Saliency Detection. In: Lu, H., Li, Y. (eds) 2nd EAI International Conference on Robotic Sensor Networks. EAI/Springer Innovations in Communication and Computing. Springer, Cham. https://doi.org/10.1007/978-3-030-17763-8_19

  • DOI: https://doi.org/10.1007/978-3-030-17763-8_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-17762-1

  • Online ISBN: 978-3-030-17763-8

  • eBook Packages: Engineering, Engineering (R0)
