Abstract
To realize the full benefit of autonomy, systems will have to react to unknown events and uncertain, dynamic environments. The resulting number of possible behaviors is essentially infinite, so the system is effectively non-deterministic; yet an operator must still understand and trust the actions of the autonomous vehicles. This research began to address non-determinism and trust by developing a user trust function based on the intent information displayed and the prescribed bounds on the allowable behaviors and actions of the non-deterministic system. Linear regression shows promise for predicting a person's confidence in the machine's prediction: regression analysis indicated that subject characteristics, scenario difficulty, experience with the system, and confidence earlier in the scenario account for approximately 60% of the variation in confidence ratings. This paper details the linear regression model, essentially a trust function, for predicting a person's confidence.
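As an illustration only, the sketch below fits an ordinary least squares model of the general form the abstract describes: confidence predicted from the four predictor categories named above. The feature definitions, the synthetic data, and the use of scikit-learn are assumptions for the sketch, not the paper's actual implementation.

```python
# Minimal sketch (not the paper's actual model): ordinary least squares
# regression predicting an operator's confidence rating from the four
# predictor categories the abstract names. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors, one per category from the abstract:
X = np.column_stack([
    rng.integers(0, 2, n),     # subject characteristic (e.g., prior UAV experience)
    rng.uniform(0, 1, n),      # scenario difficulty
    rng.integers(1, 21, n),    # experience with the system (trials completed)
    rng.uniform(0, 100, n),    # confidence rating earlier in the scenario
])
y = rng.uniform(0, 100, n)     # placeholder confidence ratings (0-100 scale)

model = LinearRegression().fit(X, y)
print("R^2:", model.score(X, y))   # the paper reports roughly 0.60 for its model
print("Coefficients:", model.coef_)
```

On real data, the fitted coefficients would indicate how strongly each predictor category moves the confidence rating, and the R-squared value is the "approximately 60% of the variation" figure the abstract cites.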
Notes
1. The use of trademarks or names of manufacturers in this report is for accurate reporting and does not constitute an official endorsement, either expressed or implied, of such products or manufacturers by the National Aeronautics and Space Administration.
Acknowledgments
This research was supported by NASA Langley Research Center IRAD funding in 2016.
Copyright information
© 2019 Springer International Publishing AG, part of Springer Nature (outside the USA)
About this paper
Cite this paper
Trujillo, A.C. (2019). Operator Trust Function for Predicted Drone Arrival. In: Chen, J. (eds) Advances in Human Factors in Robots and Unmanned Systems. AHFE 2018. Advances in Intelligent Systems and Computing, vol 784. Springer, Cham. https://doi.org/10.1007/978-3-319-94346-6_1
DOI: https://doi.org/10.1007/978-3-319-94346-6_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-94345-9
Online ISBN: 978-3-319-94346-6
eBook Packages: Intelligent Technologies and Robotics (R0)