
Facial Recognition and Pathfinding on the Humanoid Robot Pepper as a Starting Point for Social Interaction

Chapter in: New Trends in Business Information Systems and Technology

Abstract

Interaction between humanoid robots and humans is a complex process. Speech, gestures, and the recognition of communication partners are important aspects of a well-defined interaction. To appear more natural, a humanoid robot should not be stationary: it should be able to be part of a crowd and wander around a specific area. Pathfinding is therefore important, as it gives a humanoid robot the ability to connect with people in more than one place. In addition, the recognition of communication partners is the backbone of social interaction. This chapter demonstrates how OpenCV, a well-known computer vision library, supports the robot Pepper in recognizing communication partners, and how this recognition serves as the starting point for different types of small talk as the basis for a prototypical interaction process between humanoid robots and humans. The navigation functions that allow the robot to move autonomously and enable better human–robot interaction are also discussed.
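The recognition of communication partners mentioned above can be illustrated with the classic eigenfaces approach, one of the recognizers OpenCV ships. The following is a minimal sketch of the underlying idea in plain NumPy rather than the OpenCV API; the function names and data layout are illustrative assumptions, not the chapter's actual implementation.

```python
import numpy as np

def train_eigenfaces(faces, n_components=2):
    """Compute a PCA subspace ("eigenfaces") from flattened face images.

    faces: (n_samples, n_pixels) array, one flattened grayscale face per row.
    Returns the mean face and the top principal components.
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data yields the principal axes directly.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, components):
    """Map one flattened face into the low-dimensional eigenface space."""
    return components @ (face - mean)

def recognize(face, gallery, labels, mean, components):
    """Return the label of the nearest gallery face in eigenface space."""
    query = project(face, mean, components)
    coords = np.array([project(g, mean, components) for g in gallery])
    distances = np.linalg.norm(coords - query, axis=1)
    return labels[int(np.argmin(distances))]
```

In practice one would use OpenCV's ready-made recognizers (e.g. the eigenface, fisherface, or LBPH models in the `cv2.face` contrib module) rather than hand-rolling the projection, but the nearest-neighbor-in-subspace principle is the same.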
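The pathfinding side of the chapter boils down to shortest-path search over a map of the robot's surroundings. As a hedged sketch of the technique (the occupancy grid, coordinates, and function below are illustrative and are not Pepper's NAOqi navigation API), Dijkstra's algorithm on a 4-connected grid looks like this:

```python
import heapq

def dijkstra(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid (0 = free, 1 = obstacle).

    start and goal are (row, col) tuples. Returns the list of cells from
    start to goal, or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    heap = [(0, start)]
    while heap:
        d, cell = heapq.heappop(heap)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry, already improved
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(heap, (nd, (nr, nc)))
    if goal not in dist:
        return None
    # Walk the predecessor chain back from the goal.
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]
```

A real deployment would build the grid from the robot's laser and sonar readings and then hand waypoints along the resulting path to the motion layer; the search itself, however, is exactly this.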



Author information

Correspondence to Achim Dannecker.

Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Dannecker, A., Hertig, D. (2021). Facial Recognition and Pathfinding on the Humanoid Robot Pepper as a Starting Point for Social Interaction. In: Dornberger, R. (ed.) New Trends in Business Information Systems and Technology. Studies in Systems, Decision and Control, vol. 294. Springer, Cham. https://doi.org/10.1007/978-3-030-48332-6_10
