Learning-from-Observation

  • Living reference work entry in Computer Vision

Synonyms

Imitation learning; Learning from demonstration; Programming by demonstration

Definition

Learning-from-Observation is a framework for generating a robot's (or other agent's) movements to achieve a target task with minimal programming effort from the user. In this framework, the user simply demonstrates the target task, and the robot learns how to reproduce it from the observation.
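
A rough sketch of this pipeline, assuming a simple pick-and-place demonstration, is given below in Python. All names (Observation, TaskModel, recognize_tasks, generate_motion) are hypothetical illustrations, not an existing API or the method of any particular LFO system; the sketch only shows the two stages implied by the definition, namely abstracting an observed demonstration into symbolic tasks and then mapping each task onto robot motion commands.

# Hypothetical sketch of a Learning-from-Observation pipeline.
# Names and data structures are illustrative placeholders only.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Observation:
    """One time step of a human demonstration (hand pose and object state)."""
    time: float
    hand_pose: tuple               # simplified 3-D hand position
    object_in_hand: Optional[str]  # name of a grasped object, or None


@dataclass
class TaskModel:
    """Symbolic description of what was done, abstracted from how it was done."""
    action: str          # e.g., "pick" or "place"
    target_object: str


def recognize_tasks(demo: List[Observation]) -> List[TaskModel]:
    """Abstract a raw demonstration into a sequence of symbolic tasks.

    Toy rule: a grasp event becomes "pick", a release event becomes "place".
    Real LFO systems use far richer task models.
    """
    tasks: List[TaskModel] = []
    held: Optional[str] = None
    for obs in demo:
        if obs.object_in_hand is not None and held is None:
            tasks.append(TaskModel("pick", obs.object_in_hand))
            held = obs.object_in_hand
        elif obs.object_in_hand is None and held is not None:
            tasks.append(TaskModel("place", held))
            held = None
    return tasks


def generate_motion(task: TaskModel) -> List[str]:
    """Map a symbolic task onto robot-specific motion commands (placeholders)."""
    return [f"move_to({task.target_object})", f"{task.action}({task.target_object})"]


if __name__ == "__main__":
    # Toy demonstration: the user picks up a cup and puts it down again.
    demo = [
        Observation(0.0, (0.0, 0.0, 0.0), None),
        Observation(1.0, (0.1, 0.0, 0.1), "cup"),
        Observation(2.0, (0.4, 0.0, 0.1), "cup"),
        Observation(3.0, (0.4, 0.0, 0.0), None),
    ]
    for task in recognize_tasks(demo):
        print(generate_motion(task))

The point of the abstraction step is that the robot reproduces what was done (pick the cup, place the cup) rather than replaying the demonstrator's exact trajectory, which may be infeasible for a different body.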

Background

One goal in robotics and artificial intelligence (AI) is to have a robot accomplish arbitrary target tasks with as little programming effort from the user as possible. The Learning-from-Observation (LFO) framework has attracted attention as a way toward this goal. In this framework, the user simply demonstrates the target task, and the robot learns how to reproduce it from the observation. The user is not required to have any special skills in robotics. Note that the agent the user teaches is not limited to a robot.

From a scientific point of view, it is...

Author information

Correspondence to Jun Takamatsu.

Copyright information

© 2021 Springer Nature Switzerland AG

About this entry

Cite this entry

Takamatsu, J. (2021). Learning-from-Observation. In: Computer Vision. Springer, Cham. https://doi.org/10.1007/978-3-030-03243-2_869-1

  • DOI: https://doi.org/10.1007/978-3-030-03243-2_869-1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-03243-2

  • Online ISBN: 978-3-030-03243-2

  • eBook Packages: Springer Reference Computer Sciences, Reference Module Computer Science and Engineering
