
Cross-Safe: A Computer Vision-Based Approach to Make All Intersection-Related Pedestrian Signals Accessible for the Visually Impaired

  • Conference paper
Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 944)

Abstract

Intersections pose great challenges to blind or visually impaired travelers who aim to cross roads safely and efficiently amid unpredictable traffic control. Because of reduced vision and the difficulty of planning and negotiating dynamic environments, visually impaired travelers require devices or assistance (e.g., a cane or talking signals) to navigate intersections successfully. The proposed research project develops a novel computer vision-based approach, named Cross-Safe, that provides accurate and accessible guidance to the visually impaired as they cross intersections, as part of a larger unified smart wearable device. As a first step, we focused on the red-light-green-light, go/no-go problem, since accessible pedestrian signals are largely absent from urban infrastructure in New York City. Cross-Safe leverages state-of-the-art deep learning techniques for real-time pedestrian signal detection and recognition. A portable GPU unit, the Nvidia Jetson TX2, provides mobile visual computing, and a cognitive assistant provides accurate voice-based guidance. More specifically, a lightweight recognition algorithm was developed for Cross-Safe, enabling robust detection of walking signal signs and recognition of their state. Recognized signals are conveyed to the visually impaired end user through vocal guidance, providing critical information for real-time intersection navigation. Cross-Safe also balances portability, recognition accuracy, computing efficiency, and power consumption. A custom image library was built to train, validate, and test our methodology on real traffic intersections, demonstrating the feasibility of Cross-Safe in providing safe guidance to the visually impaired at urban intersections. Experimental results show robust preliminary performance of our detection and recognition algorithm.
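The abstract describes a detection-then-recognition pipeline whose per-frame output is spoken aloud to the user. The minimal sketch below illustrates one way such frame-level predictions might be stabilized before being voiced, so that a single misclassified frame cannot flip the guidance; the class names (`walk`, `dont_walk`, `no_signal`), the confidence threshold, and the majority-vote window are our assumptions for illustration, not details from the paper.

```python
from collections import Counter, deque

# Hypothetical label set; the paper's actual signal classes may differ.
LABELS = ("walk", "dont_walk", "no_signal")

class SignalSmoother:
    """Majority-vote smoothing over the last `window` frame predictions.

    A label is announced only when it wins a strict majority of the
    recent frames; low-confidence frames are demoted to "no_signal".
    """

    def __init__(self, window=5, min_conf=0.6):
        self.history = deque(maxlen=window)
        self.min_conf = min_conf

    def update(self, label, confidence):
        # Treat low-confidence detections as if no signal were seen.
        self.history.append(label if confidence >= self.min_conf else "no_signal")
        vote, count = Counter(self.history).most_common(1)[0]
        # Require a strict majority of the window before committing.
        return vote if count > len(self.history) // 2 else "no_signal"

def guidance(label):
    """Map a smoothed label to a short voice prompt for text-to-speech."""
    return {
        "walk": "Walk signal is on. You may cross.",
        "dont_walk": "Do not cross. Wait for the walk signal.",
        "no_signal": "No pedestrian signal detected.",
    }[label]
```

In a full system, `update` would be fed by the detector/recognizer running on each camera frame, and the returned prompt would be handed to a speech synthesizer only when the smoothed label changes.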



Author information

Corresponding author: Yi Fang


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Li, X., Cui, H., Rizzo, J.R., Wong, E., Fang, Y. (2020). Cross-Safe: A Computer Vision-Based Approach to Make All Intersection-Related Pedestrian Signals Accessible for the Visually Impaired. In: Arai, K., Kapoor, S. (eds) Advances in Computer Vision. CVC 2019. Advances in Intelligent Systems and Computing, vol 944. Springer, Cham. https://doi.org/10.1007/978-3-030-17798-0_13
