Abstract
This research contributes two main modifications to Detect-And-Track. To improve Multi-Object Tracking Accuracy (MOTA) while preserving the lightweight nature of the original approach, this paper proposes a gradient-based location prediction. We use the locations of the same identified person in the two previous frames to compute a gradient that predicts the person's location in the current frame. The predicted and detected locations are then compared, as are the current and previous detections. Using a weighted combination of these cues for matching, we raise the MOTA score above that of Detect-And-Track. Moreover, this research replaces the cosine distance used in the original feature-matching step with Euclidean distance, which pairs better with Intersection over Union (IoU). The weighted combination of IoU and Euclidean distance yields a higher MOTA than Detect-And-Track, and a greedy matching strategy improves it further. This combination outperforms the combination of IoU and cosine distance, achieving 56.1% MOTA overall on the validation data of the PoseTrack ICCV'17 dataset.
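The matching pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the box is extrapolated linearly from the two previous frames (the "gradient"), each track–detection pair is scored by a weighted sum of IoU with the predicted box and a similarity derived from the Euclidean distance between appearance features, and pairs are assigned greedily. The equal weights and the distance-to-similarity conversion are assumptions chosen for illustration.

```python
import numpy as np

def predict_box(prev2, prev1):
    # Constant-velocity ("gradient") extrapolation from the two previous frames.
    return 2 * prev1 - prev2

def iou(a, b):
    # Boxes given as (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def match_greedy(tracks, detections, w_iou=0.5, w_feat=0.5):
    """Greedy matching on a weighted IoU + Euclidean-feature score.
    tracks: list of (prev2_box, prev1_box, feature)
    detections: list of (box, feature)
    Weights are illustrative, not taken from the paper."""
    scores = []
    for ti, (p2, p1, tf) in enumerate(tracks):
        pred = predict_box(np.asarray(p2, float), np.asarray(p1, float))
        for di, (box, df) in enumerate(detections):
            sim_iou = iou(pred, np.asarray(box, float))
            dist = np.linalg.norm(np.asarray(tf, float) - np.asarray(df, float))
            sim_feat = 1.0 / (1.0 + dist)  # map Euclidean distance to (0, 1]
            scores.append((w_iou * sim_iou + w_feat * sim_feat, ti, di))
    used_t, used_d, assign = set(), set(), {}
    # Greedy: take the highest-scoring unmatched pair first.
    for s, ti, di in sorted(scores, reverse=True):
        if ti not in used_t and di not in used_d:
            assign[ti] = di
            used_t.add(ti)
            used_d.add(di)
    return assign
```

For instance, a track whose box moved from (0, 0, 10, 10) to (5, 0, 15, 10) is predicted at (10, 0, 20, 10), so a detection there with a similar feature vector scores highest and is matched first.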
References
Bishop, G., Welch, G.: An Introduction to the Kalman Filter, p. 80 (2001)
Cortes, C., Vapnik, V.: Support-vector networks. Mach. Learn. 20, 273–297 (1995)
Girdhar, R., Gkioxari, G., Torresani, L., Paluri, M., Tran, D.: Detect-and-track: efficient pose estimation in videos. In: CVPR (2018)
Girshick, R.: Fast R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision (2015)
Girshick, R., Donahue, J., Darrell, T., Malik, J.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (2014)
He, K., Gkioxari, G., Dollar, P., Girshick, R.: Mask R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision (2017)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
Iandola, F.N., Han, S., Moskewicz, M.W., Ashraf, K., Dally, W.J., Keutzer, K.: SqueezeNet: AlexNet-level accuracy with 50× fewer parameters and <0.5 MB model size. In: ICLR (2017)
Iqbal, U., Milan, A., Gall, J.: PoseTrack: joint multi-person pose estimation and tracking. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2017). https://arxiv.org/abs/1611.07727
Kuhn, H.W.: The Hungarian method for the assignment problem. Nav. Res. Logist. Q. 2(1–2), 83–97 (1955). https://doi.org/10.1002/nav.3800020109
Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: unified, real-time object detection. In: CVPR (2016)
Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger. In: Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017 (2017)
Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. (2017)
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations (ICRL) (2015)
Uijlings, J.R., Van De Sande, K.E., Gevers, T., Smeulders, A.W.: Selective search for object recognition. Int. J. Comput. Vis. 104, 154–171 (2013)
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Wen, SY., Yen, Y., Chen, A.Y. (2020). Human Tracking for Facility Surveillance. In: Arai, K., Kapoor, S. (eds) Advances in Computer Vision. CVC 2019. Advances in Intelligent Systems and Computing, vol 944. Springer, Cham. https://doi.org/10.1007/978-3-030-17798-0_27
Print ISBN: 978-3-030-17797-3
Online ISBN: 978-3-030-17798-0