
Why Human-Autonomy Teaming?

  • Conference paper
In: Advances in Neuroergonomics and Cognitive Engineering (AHFE 2017)

Abstract

Automation has entered nearly every aspect of our lives, yet it often remains difficult to understand. Why is this? Automation is often brittle, requiring constant human oversight to ensure it operates as intended, and that oversight has become more difficult as automation has grown more complex. To address this problem, Human-Autonomy Teaming (HAT) has been proposed. HAT builds on advances in automation transparency, a method for giving insight into the reasoning behind automated recommendations and actions, along with advances in human-automation communication (e.g., voice). These, in turn, permit greater trust in the automation when warranted, and less when not, allowing more targeted supervision of automated functions. This paper proposes a framework for HAT incorporating three key tenets: transparency, bi-directional communication, and operator-directed authority. These tenets, together with more capable automation, represent a shift in human-automation relations.


Notes

  1.

    Singularity is the concept that artificial intelligence will eventually think beyond human capacity, which according to some could negatively affect civilization.


Acknowledgments

We would like to acknowledge NASA’s Safe and Autonomous System Operations Project, which funded this research.


Corresponding author

Correspondence to Joel Lachter.


Copyright information

© 2018 Springer International Publishing AG (outside the USA)

Cite this paper

Shively, R.J., Lachter, J., Brandt, S.L., Matessa, M., Battiste, V., Johnson, W.W. (2018). Why Human-Autonomy Teaming?. In: Baldwin, C. (eds) Advances in Neuroergonomics and Cognitive Engineering. AHFE 2017. Advances in Intelligent Systems and Computing, vol 586. Springer, Cham. https://doi.org/10.1007/978-3-319-60642-2_1

  • DOI: https://doi.org/10.1007/978-3-319-60642-2_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-60641-5

  • Online ISBN: 978-3-319-60642-2

  • eBook Packages: Engineering, Engineering (R0)
