
Secondary Filter Keyframes Extraction Algorithm Based on Adaptive Top-K

  • Yan Fu
  • Chunlin Xu
  • Mei Wang
Conference paper
Part of the EAI/Springer Innovations in Communication and Computing book series (EAISICC)

Abstract

The coal mine environment resembles a night-time scene and offers little discernible information, so the video images collected by underground cameras suffer from high redundancy, little usable content, prominent light spots, and noise interference, all of which hinder the extraction of useful information from the video. To address these problems, a keyframe extraction algorithm for coal mine video images based on secondary filtering with an adaptive Top-K threshold is proposed. The algorithm computes the eigenvalues of the feature points by the principal component analysis (PCA) method and then filters the eigenvalues with the adaptive Top-K threshold to extract effective keyframes from the coal mine footage. Experimental results show that, with the adaptive threshold, the algorithm extracts keyframes more accurately.
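The abstract only outlines the method, so the following is a minimal, hypothetical Python sketch of the general idea: score frames by their contribution to the leading PCA components, filter once with an adaptive threshold, and filter again by keeping an adaptively sized Top-K subset. The mean-plus-standard-deviation threshold, the 50% rule for choosing K, and the function names are illustrative assumptions, not the authors' implementation.

import numpy as np

def pca_scores(frames):
    # Flatten each frame and centre the data matrix.
    X = np.asarray([f.ravel().astype(np.float64) for f in frames])
    X -= X.mean(axis=0)
    # Eigen-decomposition of the small frame-by-frame covariance matrix.
    cov = X @ X.T / max(len(frames) - 1, 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]              # descending eigenvalues
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    k = max(1, int(np.sum(eigvals > 1e-9 * eigvals[0])))  # keep informative components
    # Score = each frame's energy along the leading components.
    return np.sum((eigvecs[:, :k] ** 2) * eigvals[:k], axis=1)

def adaptive_topk_keyframes(frames, alpha=1.0):
    scores = pca_scores(frames)
    # First filter: adaptive threshold (assumed mean + alpha * std).
    threshold = scores.mean() + alpha * scores.std()
    candidates = np.flatnonzero(scores >= threshold)
    if candidates.size == 0:
        candidates = np.array([int(np.argmax(scores))])
    # Second filter: adaptive K, assumed here as half of the surviving frames.
    k = max(1, int(np.ceil(0.5 * candidates.size)))
    top = candidates[np.argsort(scores[candidates])[::-1][:k]]
    return sorted(top.tolist())                    # keyframe indices in temporal order

# Usage with synthetic grayscale "frames":
frames = [np.random.rand(120, 160) for _ in range(30)]
print(adaptive_topk_keyframes(frames))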

Keywords

Eigenvalues · Top-K · PCA · Adaptive · Secondary filtration


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Yan Fu (1)
  • Chunlin Xu (1)
  • Mei Wang (1)
  1. School of Computer Science and Technology and School of Electrical and Control Engineering, Xi’an University of Science and Technology, Xi’an, People’s Republic of China
