Abstract
Although artificial intelligence has advanced greatly and surpassed human performance in many tasks, the interpretability of its processing and results remains a weakness, because artificial intelligence cannot learn Gestalt cognition, a phenomenon unique to humans. Gestalt cognition can also be regarded as an enhanced version of Quine's well-known uncertainty thesis, and both imply that human understanding of images and words is not well defined. Whether Gestalt cognition, as a unique cognitive pattern, can be expressed by deep neural networks is the focus of this paper; some potential network models are discussed and further directions are given.
References
Quine, W.V.O.: Word and Object. MIT Press, Cambridge (1960)
Ritter, S., Barrett, D.G.T., Santoro, A., Botvinick, M.: Cognitive psychology for deep neural networks: a shape bias case study. In: ICML (2017)
Koffka, K.: Principles of Gestalt Psychology. Routledge, Abingdon (1999)
Szegedy, C., et al.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–9 (2015)
Vinyals, O., Blundell, C., Lillicrap, T., Kavukcuoglu, K., Wierstra, D.: Matching networks for one shot learning. arXiv:1606.04080 (2016)
Amanatiadis, A., Kaburlasos, V., Kosmatopoulos, E.: Understanding deep convolutional networks through Gestalt theory. In: IEEE International Conference on Imaging Systems and Techniques (IST) (2018)
Sylvain, T., Zhang, P., Bengio, Y., Hjelm, R., Sharma, S.: Object-centric image generation from layouts. In: CVPR (2020)
Dosovitskiy, A., Brox, T.: Generating images with perceptual similarity metrics based on deep networks. In: NIPS, pp. 658–666 (2016)
Johnson, J., Gupta, A., Fei-Fei, L.: Image generation from scene graphs. In: CVPR (2018)
El-Assady, M., et al.: Towards explainable artificial intelligence: structuring the processes of explanations. In: ACM CHI 2019, Workshop: Human-Centered Machine Learning Perspectives (2019)
Acknowledgements
This work is supported by the YongTong project of the Academy of Humanities and Social Sciences.
Copyright information
© 2021 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Fei, D. (2021). Deep Neural Networks as Interpretable Cognitive Models for the Quine’s Uncertainty Thesis. In: Ahram, T.Z., Karwowski, W., Kalra, J. (eds) Advances in Artificial Intelligence, Software and Systems Engineering. AHFE 2021. Lecture Notes in Networks and Systems, vol 271. Springer, Cham. https://doi.org/10.1007/978-3-030-80624-8_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-80623-1
Online ISBN: 978-3-030-80624-8
eBook Packages: Intelligent Technologies and Robotics (R0)