Abstract
Advances in robotics and machine learning have increased the prevalence of human-machine interaction and collaboration in the workplace. Several studies have identified trust as a major factor in how efficiently human-machine interactions occur and in how errors are recognized and handled. Little work, however, has compared this human-machine trust to human-human trust, or examined how an individual's preference for human-sourced information may interfere with their human-machine relationships, and vice versa. Outside the workplace, people consume media saturated with altered and out-of-context imagery, which has compromised our ability to evaluate the veracity of graphical information. Our experiment seeks to identify factors of implicit bias in how humans analyze information attributed to a machine (an algorithm) or to a human (a subject-area expert). Our results highlight the need to develop a cultural computational literacy.
Copyright information
© 2019 Springer International Publishing AG, part of Springer Nature
About this paper
Cite this paper
Williams, A., Sherman, I., Smarr, S., Posadas, B., Gilbert, J.E. (2019). Human Trust Factors in Image Analysis. In: Boring, R. (eds) Advances in Human Error, Reliability, Resilience, and Performance. AHFE 2018. Advances in Intelligent Systems and Computing, vol 778. Springer, Cham. https://doi.org/10.1007/978-3-319-94391-6_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-94390-9
Online ISBN: 978-3-319-94391-6
eBook Packages: Engineering (R0)