Abstract
Interaction between humanoid robots and humans is a complex process. Speech, gestures, and the recognition of communication partners are key aspects of a well-defined interaction. To appear more natural, a humanoid robot should not be stationary: it should be able to join a crowd and wander around a given area. Pathfinding is therefore important, as it gives a humanoid robot the ability to connect with people in more than one place. In addition, the recognition of communication partners is the backbone of social interaction. This chapter demonstrates how OpenCV, a well-known computer vision library, supports the robot Pepper in recognizing communication partners, and how this recognition serves as the starting point for different types of small talk as the basis of a prototypical interaction process between humanoid robots and humans. The navigation functions that allow the robot to move autonomously and enable better human–robot interaction are also discussed.
Copyright information
© 2021 Springer Nature Switzerland AG
About this chapter
Cite this chapter
Dannecker, A., Hertig, D. (2021). Facial Recognition and Pathfinding on the Humanoid Robot Pepper as a Starting Point for Social Interaction. In: Dornberger, R. (ed.) New Trends in Business Information Systems and Technology. Studies in Systems, Decision and Control, vol. 294. Springer, Cham. https://doi.org/10.1007/978-3-030-48332-6_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-48331-9
Online ISBN: 978-3-030-48332-6
eBook Packages: Intelligent Technologies and Robotics