Abstract
Automation has entered nearly every aspect of our lives, but it often remains hard to understand. Why is this? Automation is often brittle, requiring constant human oversight to ensure it operates as intended. This oversight has become harder as automation has grown more complex. To resolve this problem, Human-Autonomy Teaming (HAT) has been proposed. HAT is based on advances in automation transparency, a method for giving insight into the reasoning behind automated recommendations and actions, along with advances in human-automation communication (e.g., voice). These, in turn, permit more trust in the automation when appropriate, and less when not, allowing more targeted supervision of automated functions. This paper proposes a framework for HAT incorporating three key tenets: transparency, bi-directional communication, and operator-directed authority. These tenets, along with more capable automation, represent a shift in human-automation relations.
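The three tenets named above can be illustrated with a minimal sketch of a recommendation exchange between automation and a human operator. All class and field names here are hypothetical illustrations, not structures from the paper: transparency is modeled as an exposed rationale and confidence, bi-directional communication as an operator query channel, and operator-directed authority as the requirement that the operator, not the automation, finalizes the decision.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the three HAT tenets; names are illustrative only.
@dataclass
class Recommendation:
    action: str             # what the automation proposes
    rationale: List[str]    # transparency: the reasoning behind the proposal
    confidence: float       # transparency: how certain the automation is

@dataclass
class HATExchange:
    proposal: Recommendation
    operator_queries: List[str] = field(default_factory=list)  # bi-directional communication
    operator_decision: str = "pending"                         # operator-directed authority

    def ask(self, question: str) -> None:
        # Operator can interrogate the automation before deciding.
        self.operator_queries.append(question)

    def approve(self) -> None:
        self.operator_decision = "approved"

    def reject(self, reason: str) -> None:
        self.operator_decision = "rejected"
        self.operator_queries.append(reason)
```

In this sketch the automation never acts on its own proposal; `operator_decision` remains `"pending"` until the human approves or rejects, which is one way to encode operator-directed authority.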
Notes
1. The singularity is the concept that artificial intelligence will eventually think beyond human capacity, which some argue could negatively affect civilization.
Acknowledgments
We would like to acknowledge NASA’s Safe and Autonomous System Operations Project, which funded this research.
Copyright information
© 2018 Springer International Publishing AG (outside the USA)
Cite this paper
Shively, R.J., Lachter, J., Brandt, S.L., Matessa, M., Battiste, V., Johnson, W.W. (2018). Why Human-Autonomy Teaming?. In: Baldwin, C. (eds) Advances in Neuroergonomics and Cognitive Engineering. AHFE 2017. Advances in Intelligent Systems and Computing, vol 586. Springer, Cham. https://doi.org/10.1007/978-3-319-60642-2_1
DOI: https://doi.org/10.1007/978-3-319-60642-2_1
Print ISBN: 978-3-319-60641-5
Online ISBN: 978-3-319-60642-2