Definition
Transfer learning is a machine-learning methodology that exploits representations and/or features previously learned on other tasks to better learn a target task. In general, transfer learning speeds up learning of the target task, and when the target task lacks training data, it can improve performance. Formally, a domain \(\mathcal {D}=(\mathcal {X},P(X))\) consists of an input space \(\mathcal {X}\) and a probability distribution \(P(X)\), where \(X \in \mathcal {X}\). Given a domain \(\mathcal {D}\), a task \(\mathcal {T}=(\mathcal {Y},f(\cdot ))\) consists of a label space \(\mathcal {Y}\) and an objective predictive function \(f(\cdot)\). In this entry, we consider transfer learning in the form of transferring from a source domain and task \((\mathcal {D}_S,\mathcal {T}_S)\) to a target domain and task \((\mathcal {D}_T,\mathcal {T}_T)\)....
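The formalism above can be made concrete with a minimal sketch: a frozen encoder stands in for representations learned on the source task \((\mathcal {D}_S,\mathcal {T}_S)\), and only a small linear head is fit on scarce target data. The names `Domain`, `encode`, and `W_src`, as well as the synthetic labels, are illustrative inventions, not part of this entry.

```python
import numpy as np
from dataclasses import dataclass
from typing import Callable

@dataclass
class Domain:
    """D = (X, P(X)): an input space plus a marginal distribution (toy version)."""
    dim: int                              # dimensionality of the input space X
    sample: Callable[[int], np.ndarray]   # draws n inputs according to P(X)

rng = np.random.default_rng(0)

# Source and target share an input space but have shifted marginals P(X).
source = Domain(4, lambda n: rng.normal(0.0, 1.0, size=(n, 4)))
target = Domain(4, lambda n: rng.normal(0.5, 1.0, size=(n, 4)))

# "Pretrained" encoder: weights assumed learned on the source task, then frozen.
W_src = rng.normal(size=(4, 3))

def encode(X):
    """Reused representation transferred from the source task."""
    return np.tanh(X @ W_src)

# Target task: fit only a small linear head on scarce target data (20 examples).
X_t = target.sample(20)
y_t = (X_t.sum(axis=1) > 2.0).astype(float)   # synthetic target labels
H = np.c_[encode(X_t), np.ones(len(X_t))]     # features + bias column
head, *_ = np.linalg.lstsq(H, y_t, rcond=None)

def predict(X):
    """Approximates the target predictive function f(.) via the frozen encoder."""
    H = np.c_[encode(X), np.ones(len(X))]
    return (H @ head) > 0.5
```

Because only the four head parameters are estimated while the encoder is reused, far fewer target examples are needed than when training from scratch, which is the low-data benefit described above.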
© 2020 Springer Nature Switzerland AG
Chin, T.-W., Zhang, C. (2020). Transfer Learning. In: Computer Vision. Springer, Cham. https://doi.org/10.1007/978-3-030-03243-2_837-1