
Energy Minimization

  • Living reference work entry in Computer Vision (Springer, Cham)

Synonyms

Convex optimization; Convex minimization; Discrete optimization

Definition

Energy minimization is a subtopic of optimization in which an energy (or cost) function is minimized by a suitable algorithm. Energies fall into two types: continuous and discrete.
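
For concreteness, the problem can be written in the following generic form; the notation (E, x, N, and the label set L) is introduced here only for illustration and does not come from the entry itself.

% Generic energy-minimization problem (illustrative notation, not from the entry)
\[
  \hat{x} \;\in\; \operatorname*{arg\,min}_{x \in \mathcal{X}} \; E(x),
  \qquad
  \mathcal{X} = \mathbb{R}^{N} \ \text{(continuous energy)}
  \quad \text{or} \quad
  \mathcal{X} = \mathcal{L}^{N} \ \text{(discrete energy, } \mathcal{L} \text{ a finite label set)}.
\]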

Background

In computer vision, we often need to minimize energy functions (or cost functions) designed to estimate, reconstruct, or process something. Energy minimization falls into two major categories. In the first, the target can be represented as a vector in N-dimensional Euclidean space, i.e., the case of continuous energy. For example, if the target is a grayscale image and each pixel value is treated as a real number, then the image can be regarded as an N-dimensional vector, where N is the number of pixels. In the second, the target takes discrete values, i.e., the case of discrete energy. A typical example is image segmentation...
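
As a concrete sketch of the continuous case, the example below minimizes a simple Tikhonov-regularized denoising energy E(x) = (1/2)||x - y||^2 + (lam/2)||grad(x)||^2 over a grayscale image by plain gradient descent. This is only an illustration: the energy, the finite-difference operators, the step size, and all function names are assumptions made for this example and are not taken from the entry.

# Illustrative sketch (not from the entry): minimizing a continuous energy
#   E(x) = 0.5 * ||x - y||^2 + 0.5 * lam * ||grad(x)||^2
# over a grayscale image x (treated as a vector in R^N) by gradient descent.
import numpy as np

def grad_h(x):
    """Forward horizontal differences (zero in the last column)."""
    g = np.zeros_like(x)
    g[:, :-1] = x[:, 1:] - x[:, :-1]
    return g

def grad_v(x):
    """Forward vertical differences (zero in the last row)."""
    g = np.zeros_like(x)
    g[:-1, :] = x[1:, :] - x[:-1, :]
    return g

def div(gh, gv):
    """Discrete divergence, the negative adjoint of (grad_h, grad_v)."""
    dh = np.zeros_like(gh)
    dh[:, 0] = gh[:, 0]
    dh[:, 1:-1] = gh[:, 1:-1] - gh[:, :-2]
    dh[:, -1] = -gh[:, -2]
    dv = np.zeros_like(gv)
    dv[0, :] = gv[0, :]
    dv[1:-1, :] = gv[1:-1, :] - gv[:-2, :]
    dv[-1, :] = -gv[-2, :]
    return dh + dv

def denoise(y, lam=1.0, step=0.1, n_iter=300):
    """Gradient descent on E; a step below 2 / (1 + 8 * lam) keeps the iteration stable."""
    x = y.copy()
    for _ in range(n_iter):
        g = (x - y) - lam * div(grad_h(x), grad_v(x))  # gradient of E at x
        x -= step * g
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # synthetic grayscale ramp
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)
    restored = denoise(noisy)
    print("MSE before:", float(np.mean((noisy - clean) ** 2)))
    print("MSE after :", float(np.mean((restored - clean) ** 2)))

Plain gradient descent suffices here because this particular energy is smooth and convex; nonsmooth regularizers such as total variation would instead call for proximal or primal-dual methods, and discrete energies call for combinatorial techniques such as graph cuts or message passing.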


Author information

Correspondence to Shunsuke Ono.

Copyright information

© 2020 Springer Nature Switzerland AG

About this entry

Cite this entry

Ono, S., Saito, M. (2020). Energy Minimization. In: Computer Vision. Springer, Cham. https://doi.org/10.1007/978-3-030-03243-2_833-1

  • DOI: https://doi.org/10.1007/978-3-030-03243-2_833-1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-03243-2

  • Online ISBN: 978-3-030-03243-2

  • eBook Packages: Springer Reference Computer Sciences, Reference Module Computer Science and Engineering
