Abstract
Intersections pose great challenges to blind or visually impaired travelers, who must cross roads safely and efficiently under unpredictable traffic control. Because of reduced vision and the difficulty of planning and negotiating dynamic environments, visually impaired travelers rely on devices or assistance (e.g., a cane or talking signals) to navigate intersections successfully. The proposed research project develops a novel computer vision-based approach, named Cross-Safe, that provides accurate and accessible guidance to the visually impaired when crossing intersections, as part of a larger unified smart wearable device. As a first step, we focus on the red-light-green-light, go-no-go problem, since accessible pedestrian signals are largely absent from urban infrastructure in New York City. Cross-Safe leverages state-of-the-art deep learning techniques for real-time pedestrian signal detection and recognition. A portable GPU unit, the Nvidia Jetson TX2, provides mobile visual computing, and a cognitive assistant provides accurate voice-based guidance. More specifically, a lightweight recognition algorithm was developed for Cross-Safe, enabling robust walking-signal detection and recognition. Recognized signals are conveyed to the visually impaired end user through vocal guidance, providing critical information for real-time intersection navigation. Cross-Safe also balances portability, recognition accuracy, computing efficiency, and power consumption. A custom image library was built to train, validate, and test our methodology on real traffic intersections, demonstrating the feasibility of Cross-Safe in providing safe guidance to the visually impaired at urban intersections. Experimental results show robust preliminary performance of our detection and recognition algorithm.
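The abstract describes a detect-then-announce pipeline: a detector recognizes the pedestrian signal, and only sufficiently confident "walk" detections are spoken to the user. The paper does not publish its exact decision logic, so the sketch below is a hypothetical illustration of that guidance stage; the class labels, confidence threshold, and voice messages are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a Cross-Safe-style guidance stage: map the
# detector's per-frame outputs to a conservative voice prompt. Labels,
# threshold, and wording are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "walk" or "dont_walk" (assumed class names)
    confidence: float  # detector score in [0, 1]

def guidance_message(detections, min_conf=0.80):
    """Return the voice prompt for one frame's detections.

    Any detection below the confidence threshold is treated as
    'no signal found', so the user is never told to cross on a
    weak or ambiguous detection.
    """
    best = max(detections, key=lambda d: d.confidence, default=None)
    if best is None or best.confidence < min_conf:
        return "No pedestrian signal detected. Please wait."
    if best.label == "walk":
        return "Walk signal detected. You may cross."
    return "Do not cross. Wait for the walk signal."
```

In a deployed system the returned string would be handed to a text-to-speech engine on the Jetson TX2; the fail-safe default (report "no signal" rather than guess) reflects the safety emphasis in the abstract.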
© 2020 Springer Nature Switzerland AG
Cite this paper
Li, X., Cui, H., Rizzo, JR., Wong, E., Fang, Y. (2020). Cross-Safe: A Computer Vision-Based Approach to Make All Intersection-Related Pedestrian Signals Accessible for the Visually Impaired. In: Arai, K., Kapoor, S. (eds) Advances in Computer Vision. CVC 2019. Advances in Intelligent Systems and Computing, vol 944. Springer, Cham. https://doi.org/10.1007/978-3-030-17798-0_13
DOI: https://doi.org/10.1007/978-3-030-17798-0_13
Print ISBN: 978-3-030-17797-3
Online ISBN: 978-3-030-17798-0
eBook Packages: Intelligent Technologies and Robotics