
Deep Generative Models

  • Living reference work entry in Computer Vision

Synonyms

Deep generator networks


Definition

Deep generative models, or deep generator networks, refer to a family of deep networks that take an input tensor z and output a sample drawn from the distribution of certain patterns. In computer vision, such patterns could be specific object categories, such as cats, as shown in Fig. 1. The input tensor z can be as simple as a randomly generated vector. A deep generative model can be trained on a set of images in an unsupervised manner. Two popular algorithmic formulations are the generative adversarial network (GAN) [9] and the variational auto-encoder (VAE) [14].

Fig. 1: A deep generative model (a generator network) takes a tensor as input and outputs a sample following the distribution of certain patterns. The figure presents a generator network for cat images.
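The mapping described in the definition, from a random latent tensor z to a generated sample, can be sketched with a tiny untrained network. The following is a minimal illustration, not the architecture of any cited model: a two-layer MLP with random (i.e., unlearned) weights that maps latent vectors to flat 8×8 "images"; all sizes and names (`LATENT_DIM`, `generate`) are illustrative assumptions. In a real GAN or VAE, the weights would be learned from training images.

```python
import numpy as np

# Illustrative generator network: latent vector z -> flat 8x8 "image".
# Weights are random placeholders; training (GAN or VAE) would learn them.
rng = np.random.default_rng(0)
LATENT_DIM, HIDDEN, IMG_SIZE = 16, 64, 8 * 8

W1 = rng.standard_normal((LATENT_DIM, HIDDEN)) * 0.1
b1 = np.zeros(HIDDEN)
W2 = rng.standard_normal((HIDDEN, IMG_SIZE)) * 0.1
b2 = np.zeros(IMG_SIZE)

def generate(z):
    """Map a batch of latent vectors z to samples with pixel values in (0, 1)."""
    h = np.maximum(z @ W1 + b1, 0.0)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid keeps pixels in (0, 1)

z = rng.standard_normal((4, LATENT_DIM))          # four random input tensors
imgs = generate(z).reshape(4, 8, 8)
print(imgs.shape)  # (4, 8, 8)
```

Each random draw of z yields a different output sample; once trained, the distribution of these samples is intended to match the distribution of the training images.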


Notes

  1. https://en.wikipedia.org/wiki/Generative_adversarial_network

References

  1. Ackley DH, Hinton GE, Sejnowski TJ (1985) A learning algorithm for Boltzmann machines. Cogn Sci 9(1):147–169

  2. Blake A, Kohli P, Rother C (eds) (2011) Markov random fields for vision and image processing. The MIT Press, Cambridge, MA

  3. Bao J, Chen D, Wen F, Li H, Hua G (2018) Towards open-set identity preserving face synthesis. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 6713–6722

  4. Blei DM, Ng AY, Jordan MI (2003) Latent Dirichlet allocation. J Mach Learn Res 3:993–1022

  5. Bloesch M, Czarnowski J, Clark R, Leutenegger S, Davison AJ (2018) CodeSLAM – learning a compact, optimisable representation for dense visual SLAM. In: Proceedings of the IEEE conference on computer vision and pattern recognition

  6. Dempster AP, Laird NM, Rubin DB (1977) Maximum likelihood from incomplete data via the EM algorithm. J R Stat Soc Ser B (Methodol) 39(1):1–38

  7. Doersch C (2016) Tutorial on variational autoencoders. arXiv preprint arXiv:1606.05908

  8. Ghahramani Z, Beal MJ (2001) Graphical models and variational methods. In: Advanced mean field methods – theory and practice. Neural information processing series. MIT Press, Cambridge, MA

  9. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial nets. In: Ghahramani Z, Welling M, Cortes C, Lawrence ND, Weinberger KQ (eds) Advances in neural information processing systems, vol 27. Curran Associates, Inc., pp 2672–2680

  10. Gutmann M, Hyvärinen A (2010) Noise-contrastive estimation: a new estimation principle for unnormalized statistical models. In: Proceedings of the international conference on artificial intelligence and statistics, Sardinia

  11. Hinton GE (2002) Training products of experts by minimizing contrastive divergence. Neural Comput 14(8):1771–1800

  12. Hinton GE, Osindero S, Teh YW (2006) A fast learning algorithm for deep belief nets. Neural Comput 18(7):1527–1554

  13. Karras T, Laine S, Aila T (2019) A style-based generator architecture for generative adversarial networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, Long Beach

  14. Kingma DP, Welling M (2014) Auto-encoding variational Bayes. In: International conference on learning representations

  15. LeCun Y, Chopra S, Hadsell R, Ranzato MA, Huang FJ (2006) A tutorial on energy-based learning. In: Predicting structured data. The MIT Press, Cambridge, MA

  16. Pearl J, Russell S (2003) Bayesian networks. In: Handbook of brain theory and neural networks. MIT Press, Cambridge, MA, pp 157–160

  17. Rabiner LR (1989) A tutorial on hidden Markov models and selected applications in speech recognition. Proc IEEE 77(2):257–286

  18. Schawinski K, Zhang C, Zhang H, Fowler L, Santhanam GK (2017) Generative adversarial networks recover features in astrophysical images of galaxies beyond the deconvolution limit. Mon Not R Astron Soc Lett 467(1):L110–L114

  19. Tu Z (2007) Learning generative models via discriminative approaches. In: Proceedings of the IEEE conference on computer vision and pattern recognition, Minneapolis

  20. Zhu J-Y, Park T, Isola P, Efros AA (2017) Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE international conference on computer vision


Author information

Correspondence to Gang Hua.


Copyright information

© 2020 Springer Nature Switzerland AG

About this entry


Cite this entry

Hua, G. (2020). Deep Generative Models. In: Computer Vision. Springer, Cham. https://doi.org/10.1007/978-3-030-03243-2_865-1


  • Print ISBN: 978-3-030-03243-2

  • Online ISBN: 978-3-030-03243-2
