
Autoencoder

  • Living reference work entry
Computer Vision

Synonyms

Encoder-decoder architectures

Definition

An autoencoder is a deep neural architecture comprising two parts: (1) an encoder network that maps each input data point to a point in a different (latent) space, and (2) a decoder network that maps points in the latent space back to the data space. The two components are trained jointly in an unsupervised way, so that their composition approximately preserves the points of a given training dataset.
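As a concrete illustration of this two-part structure, the sketch below sets up a minimal linear encoder and decoder in NumPy. The dimensions (an 8-D data space, a 3-D latent space) and the random weight initialization are arbitrary choices for illustration only; with untrained weights, the composition does not yet preserve inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration: an 8-D data space, a 3-D latent space.
D, K = 8, 3

# Encoder weights: map a data point x in R^D to a latent code z in R^K.
W_enc = rng.normal(scale=0.1, size=(K, D))
# Decoder weights: map a latent code back to the data space R^D.
W_dec = rng.normal(scale=0.1, size=(D, K))

def encode(x):
    return W_enc @ x

def decode(z):
    return W_dec @ z

# The composition decode(encode(x)) is what training would push toward x;
# with random, untrained weights it is just some point in R^D.
x = rng.normal(size=D)
x_hat = decode(encode(x))
```

In practice both networks are nonlinear and multi-layer; the linear case is used here only because it makes the encode-decode composition easy to inspect.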

Background

Autoencoders are a very popular deep architecture for unsupervised learning, dating back to at least the 1980s [1, 2]. As with other unsupervised learning methods such as principal component analysis [3], the objective of autoencoder learning is to find some latent representation of the points in a training dataset that...
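Although the passage above is truncated, the training objective it alludes to is reconstruction: minimize the discrepancy between each training point and its encode-decode reconstruction. The sketch below is a minimal, hypothetical instance: a linear autoencoder trained by gradient descent on mean squared reconstruction error over synthetic data whose points lie near a low-dimensional subspace (all sizes, the learning rate, and the iteration count are illustrative choices, not values from the entry).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training set: 200 points in R^8 lying near a 3-D subspace,
# so a 3-D latent code suffices for accurate reconstruction.
N, D, K = 200, 8, 3
basis = rng.normal(size=(D, K))
X = rng.normal(size=(N, K)) @ basis.T + 0.01 * rng.normal(size=(N, D))

# Linear encoder (K x D) and decoder (D x K), trained jointly.
W_enc = 0.3 * rng.normal(size=(K, D))
W_dec = 0.3 * rng.normal(size=(D, K))

def mse(W_enc, W_dec):
    # Mean squared error between inputs and their reconstructions.
    residual = X @ W_enc.T @ W_dec.T - X
    return (residual ** 2).mean()

lr = 0.05
initial_loss = mse(W_enc, W_dec)
for _ in range(2000):
    Z = X @ W_enc.T             # latent codes, N x K
    residual = Z @ W_dec.T - X  # reconstruction error, N x D
    # Gradients of the mean squared reconstruction error
    # with respect to the decoder and encoder weights.
    g_dec = (2.0 / (N * D)) * residual.T @ Z
    g_enc = (2.0 / (N * D)) * W_dec.T @ residual.T @ X
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

final_loss = mse(W_enc, W_dec)
```

For the linear case, minimizing this loss recovers the same subspace as PCA [1, 3]; deep autoencoders simply replace the two weight matrices with nonlinear networks trained by backpropagation.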


References

  1. Baldi P, Hornik K (1989) Neural networks and principal component analysis: learning from examples without local minima. Neural Netw 2(1):53–58

  2. Hinton GE, Zemel RS (1994) Autoencoders, minimum description length and Helmholtz free energy. In: Advances in neural information processing systems. MIT Press, pp 3–10

  3. Pearson K (1901) On lines and planes of closest fit to systems of points in space. Philos Mag 2(6):559–572

  4. LeCun Y, Boser B, Denker JS, Henderson D, Howard RE, Hubbard W, Jackel LD (1989) Backpropagation applied to handwritten zip code recognition. Neural Comput 1(4):541–551

  5. Johnson J, Alahi A, Fei-Fei L (2016) Perceptual losses for real-time style transfer and super-resolution. In: European conference on computer vision. Springer, pp 694–711

  6. Ng A et al (2011) Sparse autoencoder. CS294A Lecture Notes 72:1–19

  7. Rifai S, Vincent P, Muller X, Glorot X, Bengio Y (2011) Contractive auto-encoders: explicit invariance during feature extraction. In: Proceedings of the 28th international conference on machine learning. Omnipress, Bellevue, pp 833–840

  8. Vincent P, Larochelle H, Bengio Y, Manzagol P-A (2008) Extracting and composing robust features with denoising autoencoders. In: Proceedings of the 25th international conference on machine learning. ACM, Haifa, pp 1096–1103

  9. Kingma DP, Welling M (2013) Auto-encoding variational Bayes. arXiv preprint: 1312.6114

  10. Rezende DJ, Mohamed S, Wierstra D (2014) Stochastic backpropagation and approximate inference in deep generative models. arXiv preprint: 1401.4082

  11. Theis L, Shi W, Cunningham A, Huszár F (2017) Lossy image compression with compressive autoencoders. arXiv preprint: 1703.00395

  12. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial nets. In: Advances in neural information processing systems, pp 2672–2680

  13. Makhzani A, Shlens J, Jaitly N, Goodfellow I, Frey B (2015) Adversarial autoencoders. arXiv preprint: 1511.05644

  14. Zhao J, Mathieu M, LeCun Y (2016) Energy-based generative adversarial network. arXiv preprint: 1609.03126

  15. Bojanowski P, Joulin A, Lopez-Paz D, Szlam A (2018) Optimizing the latent space of generative networks. In: International conference on machine learning, pp 599–608

  16. Martinelli M, Tronci E, Dipoppa G, Balducelli C (2004) Electric power system anomaly detection using neural networks. In: International conference on knowledge-based and intelligent information and engineering systems. Springer, Wellington, pp 1242–1248

  17. Tagawa T, Tadokoro Y, Yairi T (2015) Structured denoising autoencoder for fault detection and analysis. In: Asian conference on machine learning, pp 96–111


Copyright information

© 2020 Springer Nature Switzerland AG

About this entry


Cite this entry

Lempitsky, V. (2020). Autoencoder. In: Computer Vision. Springer, Cham. https://doi.org/10.1007/978-3-030-03243-2_862-1


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-03243-2

  • Online ISBN: 978-3-030-03243-2

