Gaussian Process-Based Transfer Kernel Learning for Unsupervised Domain Adaptation
Abstract
1. Introduction
- We propose a new deep UDA method, GPTKL, which introduces a cross-domain Gaussian process and shared classification model to achieve domain knowledge transfer and improve the discrimination ability of the model.
- We introduce the deep kernel learning strategy into the cross-domain Gaussian process to learn a deep feature space with both discriminability and transferability.
- We conduct experiments to verify the effectiveness of GPTKL and the transfer kernel function.
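The contributions above hinge on a transfer kernel: a kernel defined jointly over source and target samples whose cross-domain block is modulated by a domain-similarity coefficient, with the base kernel computed on deep network features. A minimal NumPy sketch of the general construction — the RBF base kernel, the fixed `lam`, and all names here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    # Squared-exponential base kernel between the rows of a and b.
    # In deep kernel learning, a and b would be features extracted by
    # a neural network rather than raw inputs.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def transfer_kernel(xs, xt, lam=0.5, gamma=1.0):
    """Joint kernel matrix over source samples xs and target samples xt.

    Within-domain blocks use the base kernel directly; the cross-domain
    block is scaled by a similarity coefficient lam in [0, 1], so lam
    controls how much information is shared across domains.
    """
    kss = rbf(xs, xs, gamma)          # source-source block
    ktt = rbf(xt, xt, gamma)          # target-target block
    kst = lam * rbf(xs, xt, gamma)    # damped cross-domain block
    return np.block([[kss, kst], [kst.T, ktt]])
```

In a Gaussian process, this matrix would serve as the covariance of the joint prior over source and target function values; in a deep variant, `lam` and the feature network would be learned jointly, e.g. by maximizing the marginal likelihood.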
2. Related Work
2.1. Deep UDA Methods
2.2. Gaussian Process
3. Our Method
3.1. Motivation and Notations
3.2. Gaussian Process-Based Transfer Kernel Learning
3.3. Classification Model Learning on the Source and Target Domains
3.4. Loss Function and Training Procedure
Algorithm 1: The GPTKL training algorithm.
Input: , . Output: The network parameters .
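The ablation study in Section 4.3.2 toggles two ingredients of the loss: transfer kernel learning and a mutual-information term. As a rough illustration of how such a composite objective is commonly assembled — the function names, the entropy-based mutual-information estimate, and the trade-off weights below are assumptions for illustration, not the paper's exact loss:

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax over logits z.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mutual_information(p, eps=1e-12):
    """Estimate I(input; prediction) from a batch of class probabilities p:
    entropy of the average prediction minus the average per-sample entropy.
    Maximizing it pushes predictions to be individually confident but
    diverse across the batch."""
    p_bar = p.mean(axis=0)
    h_marginal = -(p_bar * np.log(p_bar + eps)).sum()
    h_conditional = -(p * np.log(p + eps)).sum(axis=1).mean()
    return h_marginal - h_conditional

def total_loss(ce_source, gp_nll, mi_target, alpha=1.0, beta=0.1):
    # Hypothetical weighting: supervised source loss, plus a GP negative
    # log marginal likelihood for the transfer kernel, minus the mutual
    # information on target predictions (which is to be maximized).
    return ce_source + alpha * gp_nll - beta * mi_target
```

Under this estimate, a batch of identical uniform predictions yields zero mutual information, while confident and evenly spread one-hot predictions yield the maximum, log of the number of classes.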
4. Experiment Results and Analysis
4.1. Datasets and Experiment Settings
4.2. Experiment Results
4.3. Effectiveness Analysis
4.3.1. Sensitivity Analysis
4.3.2. Ablation Study
4.3.3. Feature Space Visualization
4.3.4. Distribution Discrepancy and Ideal Joint Hypothesis
4.3.5. Discrimination Ability of Kernels
4.3.6. Time Complexity of GPTKL
5. Conclusions
Author Contributions
Funding
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
1. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 1097–1105.
2. He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2961–2969.
3. Gu, J.; Wang, Z.; Kuen, J.; Ma, L.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, G.; Cai, J.; et al. Recent advances in convolutional neural networks. Pattern Recognit. 2018, 77, 354–377.
4. Pan, S.J.; Yang, Q. A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359.
5. Zhang, K.; Schölkopf, B.; Muandet, K.; Wang, Z. Domain adaptation under target and conditional shift. In Proceedings of the International Conference on Machine Learning, Atlanta, GA, USA, 17–19 June 2013; pp. 819–827.
6. Ben-David, S.; Blitzer, J.; Crammer, K.; Pereira, F. Analysis of representations for domain adaptation. Adv. Neural Inf. Process. Syst. 2006, 19, 1–8.
7. Fernando, B.; Habrard, A.; Sebban, M.; Tuytelaars, T. Unsupervised visual domain adaptation using subspace alignment. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 2960–2967.
8. Wang, M.; Deng, W. Deep visual domain adaptation: A survey. Neurocomputing 2018, 312, 135–153.
9. Long, M.; Cao, Y.; Wang, J.; Jordan, M. Learning transferable features with deep adaptation networks. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015; pp. 97–105.
10. Ganin, Y.; Ustinova, E.; Ajakan, H.; Germain, P.; Larochelle, H.; Laviolette, F.; Marchand, M.; Lempitsky, V. Domain-adversarial training of neural networks. J. Mach. Learn. Res. 2016, 17, 1–35.
11. Long, M.; Zhu, H.; Wang, J.; Jordan, M.I. Deep transfer learning with joint adaptation networks. In Proceedings of the International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 2208–2217.
12. Long, M.; Cao, Z.; Wang, J.; Jordan, M.I. Conditional adversarial domain adaptation. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 3–8 December 2018; pp. 1647–1657.
13. Ge, P.; Ren, C.X.; Xu, X.L.; Yan, H. Unsupervised domain adaptation via deep conditional adaptation network. Pattern Recognit. 2023, 134, 109088.
14. Gretton, A.; Borgwardt, K.; Rasch, M.; Schölkopf, B.; Smola, A. A kernel method for the two-sample-problem. Adv. Neural Inf. Process. Syst. 2006, 19, 1–8.
15. Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial networks. Commun. ACM 2020, 63, 139–144.
16. Mirza, M.; Osindero, S. Conditional generative adversarial nets. arXiv 2014, arXiv:1411.1784.
17. Chen, C.; Chen, Z.; Jiang, B.; Jin, X. Joint domain alignment and discriminative feature learning for unsupervised deep domain adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 3296–3303.
18. Luo, L.; Chen, L.; Hu, S.; Lu, Y.; Wang, X. Discriminative and geometry-aware unsupervised domain adaptation. IEEE Trans. Cybern. 2020, 50, 3914–3927.
19. Chen, X.; Wang, S.; Long, M.; Wang, J. Transferability vs. discriminability: Batch spectral penalization for adversarial domain adaptation. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019; pp. 1081–1090.
20. Xiao, N.; Zhang, L. Dynamic weighted learning for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 15242–15251.
21. Tian, Q.; Zhu, Y.; Sun, H.; Chen, S.; Yin, H. Unsupervised domain adaptation through dynamically aligning both the feature and label spaces. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 8562–8573.
22. Donahue, J.; Jia, Y.; Vinyals, O.; Hoffman, J.; Ning, Z.; Tzeng, E.; Darrell, T. DeCAF: A deep convolutional activation feature for generic visual recognition. In Proceedings of the International Conference on Machine Learning, Beijing, China, 21–26 June 2014; pp. 647–655.
23. Yosinski, J.; Clune, J.; Bengio, Y.; Lipson, H. How transferable are features in deep neural networks? In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 3320–3328.
24. Sun, B.; Saenko, K. Deep CORAL: Correlation alignment for deep domain adaptation. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin, Germany, 2016; pp. 443–450.
25. Zhu, Y.; Zhuang, F.; Wang, J.; Ke, G.; Chen, J.; Bian, J.; Xiong, H.; He, Q. Deep subdomain adaptation network for image classification. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 1713–1722.
26. Tzeng, E.; Hoffman, J.; Saenko, K.; Darrell, T. Adversarial discriminative domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7167–7176.
27. Saito, K.; Watanabe, K.; Ushiku, Y.; Harada, T. Maximum classifier discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3723–3732.
28. Xu, R.; Li, G.; Yang, J.; Lin, L. Larger norm more transferable: An adaptive feature norm approach for unsupervised domain adaptation. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1426–1435.
29. Li, M.; Zhai, Y.M.; Luo, Y.W.; Ge, P.F.; Ren, C.X. Enhanced transport distance for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 13936–13944.
30. Luo, Y.W.; Ren, C.X.; Dai, D.Q.; Yan, H. Unsupervised domain adaptation via discriminative manifold propagation. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 1653–1669.
31. Yu, K.; Tresp, V.; Schwaighofer, A. Learning Gaussian processes from multiple tasks. In Proceedings of the International Conference on Machine Learning, Bonn, Germany, 7–11 August 2005; pp. 1012–1019.
32. Cao, B.; Pan, S.J.; Zhang, Y.; Yeung, D.Y.; Yang, Q. Adaptive transfer learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Atlanta, GA, USA, 11–15 July 2010; pp. 407–412.
33. Wei, P.; Vo, T.V.; Qu, X.; Ong, Y.S.; Ma, Z. Transfer kernel learning for multi-source transfer Gaussian process regression. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 3862–3876.
34. Wei, P.; Ke, Y.; Ong, Y.S.; Ma, Z. Adaptive transfer kernel learning for transfer Gaussian process regression. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 7142–7156.
35. Oussama, A.; Khaldi, B.; Kherfi, M.L. A fast weighted multi-view Bayesian learning scheme with deep learning for text-based image retrieval from unlabeled galleries. Multimed. Tools Appl. 2023, 82, 10795–10812.
36. Kim, M.; Sahu, P.; Gholami, B.; Pavlovic, V. Unsupervised visual domain adaptation: A deep max-margin Gaussian process approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 4380–4390.
37. Ben-David, S.; Blitzer, J.; Crammer, K.; Kulesza, A.; Pereira, F.; Vaughan, J.W. A theory of learning from different domains. Mach. Learn. 2010, 79, 151–175.
38. Ren, C.X.; Ge, P.; Dai, D.Q.; Yan, H. Learning kernel for conditional moment-matching discrepancy-based image classification. IEEE Trans. Cybern. 2019, 51, 2006–2018.
39. Chen, X.; Duan, Y.; Houthooft, R.; Schulman, J.; Sutskever, I.; Abbeel, P. InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; pp. 2172–2180.
40. Ge, P.; Ren, C.X.; Dai, D.Q.; Feng, J.; Yan, S. Dual adversarial autoencoders for clustering. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 1417–1424.
41. Saenko, K.; Kulis, B.; Fritz, M.; Darrell, T. Adapting visual category models to new domains. In Proceedings of the European Conference on Computer Vision, Heraklion, Crete, Greece, 5–11 September 2010; pp. 213–226.
42. Venkateswara, H.; Eusebio, J.; Chakraborty, S.; Panchanathan, S. Deep hashing network for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5018–5027.
43. Peng, X.; Usman, B.; Kaushik, N.; Hoffman, J.; Wang, D.; Saenko, K. VisDA: The visual domain adaptation challenge. arXiv 2017, arXiv:1710.06924.
44. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
45. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
46. Zuo, L.; Jing, M.; Li, J.; Zhu, L.; Lu, K.; Yang, Y. Challenging tough samples in unsupervised domain adaptation. Pattern Recognit. 2021, 110, 107540.
47. Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605.
Methods | MNIST→USPS | USPS→MNIST | SVHN→MNIST | Average |
---|---|---|---|---|
DAN [9] | 80.3 ± 0.4 | 77.8 ± 0.4 | 73.5 ± 0.4 | 77.2 |
DANN [10] | 77.1 ± 1.8 | 73.0 ± 2.0 | 73.9 ± 0.1 | 74.7 |
CDAN [12] | 93.9 ± 0.4 | 96.9 ± 0.3 | 88.5 ± 0.1 | 93.1 |
CDAN+E [12] | 95.6 | 98.0 | 89.2 | 94.3 |
MCD [27] | 94.2 ± 0.5 | 94.1 ± 0.5 | 96.2 ± 0.4 | 94.8 |
BSP+CDAN [19] | 95.0 | 98.1 | 92.1 | 95.1 |
ETD [29] | 96.4 ± 0.2 | 96.3 ± 0.1 | 97.9 ± 0.1 | 96.9 |
GPDA [36] | 98.1 ± 0.1 | - | 98.2 ± 0.1 | 98.2 |
CTSN [46] | 96.1 ± 0.3 | 97.3 ± 0.2 | 97.1 ± 0.3 | 96.8 |
DWL [20] | 97.3 | 97.4 | 98.1 | 97.6 |
DSAN [25] | 96.9 ± 0.2 | 95.3 ± 0.1 | 90.1 ± 0.4 | 94.1 |
DAFL [21] | 97.0 ± 0.3 | 97.6 ± 0.1 | 97.9 ± 0.1 | 97.5 |
DMP [30] | 97.2 | 95.0 | 94.7 | 95.6 |
DCAN [13] | 97.5 ± 0.1 | 98.5 ± 0.1 | 98.7 ± 0.1 | 98.2 |
GPTKL (ours) | 98.6 ± 0.1 | 98.3 ± 0.1 | 99.1 ± 0.1 | 98.7 |
Methods | A→W | D→W | W→D | A→D | D→A | W→A | Average |
---|---|---|---|---|---|---|---|
Resnet [44] | 68.4 ± 0.2 | 96.7 ± 0.1 | 99.3 ± 0.1 | 68.9 ± 0.2 | 62.5 ± 0.3 | 60.7 ± 0.3 | 76.1 |
DAN [9] | 80.5 ± 0.4 | 97.1 ± 0.2 | 99.6 ± 0.1 | 78.6 ± 0.2 | 63.6 ± 0.3 | 62.8 ± 0.2 | 80.4 |
DANN [10] | 82.6 ± 0.2 | 96.9 ± 0.3 | 99.3 ± 0.1 | 81.5 ± 0.1 | 68.4 ± 0.5 | 67.5 ± 0.2 | 82.7 |
CDAN [12] | 93.1 ± 0.2 | 98.2 ± 0.2 | 100.0 ± 0.0 | 89.8 ± 0.3 | 70.1 ± 0.4 | 68.0 ± 0.4 | 86.6 |
CDAN+E [12] | 94.1 ± 0.1 | 98.6 ± 0.1 | 100.0 ± 0.0 | 92.9 ± 0.2 | 71.0 ± 0.3 | 69.3 ± 0.3 | 87.7 |
MCD [27] | 90.4 | 98.5 | 100.0 | 87.3 | 68.3 | 67.6 | 85.4 |
SAFN [28] | 88.8 ± 0.4 | 98.4 ± 0.0 | 99.8 ± 0.0 | 87.7 ± 1.3 | 69.8 ± 0.4 | 69.7 ± 0.2 | 85.7 |
GPDA [36] | 85.8 | 97.9 | 99.8 | 88.8 | 71.2 | 71.5 | 85.8 |
BSP+CDAN [19] | 93.3 ± 0.2 | 98.2 ± 0.2 | 100.0 ± 0.0 | 93.0 ± 0.2 | 73.6 ± 0.3 | 72.6 ± 0.3 | 88.5 |
ETD [29] | 92.1 ± 0.1 | 100.0 ± 0.0 | 100.0 ± 0.0 | 88.0 ± 0.2 | 71.0 ± 0.4 | 67.8 ± 0.1 | 86.2 |
CTSN [46] | 90.6 ± 0.3 | 98.6 ± 0.5 | 99.9 ± 0.1 | 89.3 ± 0.3 | 73.7 ± 0.4 | 74.1 ± 0.3 | 87.7 |
DWL [20] | 89.2 | 99.2 | 100.0 | 91.2 | 73.1 | 69.8 | 87.1 |
DSAN [25] | 93.6 ± 0.2 | 98.3 ± 0.1 | 100.0 ± 0.0 | 90.2 ± 0.7 | 73.5 ± 0.5 | 74.8 ± 0.4 | 88.4 |
DAFL [21] | 94.8 ± 0.5 | 100.0 ± 0.0 | 100.0 ± 0.0 | 93.9 ± 0.2 | 75.2 ± 0.3 | 74.2 ± 0.4 | 89.7 |
DMP [30] | 93.0 ± 0.3 | 99.0 ± 0.1 | 100.0 ± 0.0 | 91.0 ± 0.4 | 71.4 ± 0.2 | 70.2 ± 0.2 | 87.4 |
DCAN [13] | 93.2 ± 0.3 | 98.7 ± 0.1 | 100.0 ± 0.0 | 91.6 ± 0.4 | 74.6 ± 0.2 | 74.2 ± 0.2 | 88.7 |
GPTKL (ours) | 94.4 ± 0.4 | 99.2 ± 0.1 | 100.0 ± 0.0 | 94.8 ± 0.5 | 75.7 ± 0.5 | 76.7 ± 0.4 | 90.1 |
Methods | A→C | A→P | A→R | C→A | C→P | C→R | P→A | P→C | P→R | R→A | R→C | R→P | Average |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Resnet [44] | 34.9 | 50.0 | 58.0 | 37.4 | 41.9 | 46.2 | 38.5 | 31.2 | 60.4 | 53.9 | 41.2 | 59.9 | 46.1 |
DAN [9] | 43.9 | 57.0 | 67.9 | 45.8 | 56.5 | 60.4 | 44.0 | 43.6 | 67.7 | 63.1 | 51.5 | 74.3 | 56.3 |
DANN [10] | 45.6 | 59.3 | 70.1 | 47.0 | 58.5 | 60.9 | 46.1 | 43.7 | 68.5 | 63.2 | 51.8 | 76.8 | 57.6 |
CDAN [12] | 49.0 | 69.3 | 74.5 | 54.4 | 66.0 | 68.4 | 55.6 | 48.3 | 75.9 | 68.4 | 55.4 | 80.5 | 63.8 |
CDAN+E [12] | 50.7 | 70.6 | 76.0 | 57.6 | 70.0 | 70.0 | 57.4 | 50.9 | 77.3 | 70.9 | 56.7 | 81.6 | 65.8 |
MCD [27] | 51.7 | 72.2 | 78.2 | 63.7 | 69.5 | 70.8 | 61.5 | 52.8 | 78.0 | 74.5 | 58.4 | 81.8 | 67.8 |
SAFN [28] | 52.0 | 71.7 | 76.3 | 64.2 | 69.9 | 71.9 | 63.7 | 51.4 | 77.1 | 70.9 | 57.1 | 81.5 | 67.3 |
GPDA [36] | 45.0 | 62.0 | 73.3 | 51.0 | 59.6 | 64.0 | 54.2 | 45.3 | 73.2 | 62.8 | 50.0 | 78.0 | 59.9 |
BSP+CDAN [19] | 52.0 | 68.6 | 76.1 | 58.0 | 70.3 | 70.2 | 58.6 | 50.2 | 77.6 | 72.2 | 59.3 | 81.9 | 66.3 |
ETD [29] | 51.3 | 71.9 | 85.7 | 57.6 | 69.2 | 73.7 | 57.8 | 51.2 | 79.3 | 70.2 | 57.5 | 82.1 | 67.3 |
DSAN [25] | 54.4 | 70.8 | 75.4 | 60.4 | 67.8 | 68.0 | 62.6 | 55.9 | 78.5 | 73.8 | 60.6 | 83.1 | 67.6 |
DMP [30] | 52.3 | 73.0 | 77.3 | 64.3 | 72.0 | 71.8 | 63.6 | 52.7 | 78.5 | 72.0 | 57.7 | 81.6 | 68.1 |
DCAN [13] | 58.0 | 76.2 | 79.3 | 67.3 | 76.1 | 75.6 | 65.4 | 56.0 | 80.7 | 74.2 | 61.2 | 84.2 | 71.2 |
GPTKL (ours) | 58.4 ± 0.3 | 77.9 ± 0.4 | 80.6 ± 0.2 | 67.8 ± 0.5 | 76.9 ± 0.6 | 77.2 ± 0.2 | 67.4 ± 0.4 | 57.9 ± 0.5 | 82.0 ± 0.3 | 75.3 ± 0.2 | 61.5 ± 0.4 | 85.1 ± 0.5 | 72.3 |
Methods | Plane | Bcycl | Bus | Car | Horse | Knife | Mcyle | Person | Plant | Sktbrd | Train | Truck | Average |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ResNet [44] | 55.1 | 53.3 | 61.9 | 59.1 | 80.6 | 17.9 | 79.7 | 31.2 | 81.0 | 26.5 | 73.5 | 8.5 | 52.4 |
DAN [9] | 87.1 | 63.0 | 76.5 | 42.0 | 90.3 | 42.9 | 85.9 | 53.1 | 49.7 | 36.3 | 85.8 | 20.7 | 61.1 |
DANN [10] | 81.9 | 77.7 | 82.8 | 44.3 | 81.2 | 29.5 | 65.1 | 28.6 | 51.9 | 54.6 | 82.8 | 7.8 | 57.4 |
MCD [27] | 87.0 | 60.9 | 83.7 | 64.0 | 88.9 | 79.6 | 84.7 | 76.9 | 88.6 | 40.3 | 83.0 | 25.8 | 71.9 |
CDAN [12] | 85.2 | 66.9 | 83.0 | 50.8 | 84.2 | 74.9 | 88.1 | 74.5 | 83.4 | 76.0 | 81.9 | 38.0 | 73.7 |
GPDA [36] | 83.0 | 74.3 | 80.4 | 66.0 | 87.6 | 75.3 | 83.8 | 73.1 | 90.1 | 57.3 | 80.2 | 37.9 | 73.3 |
BSP+CDAN [19] | 92.4 | 61.0 | 81.0 | 57.5 | 89.0 | 80.6 | 90.1 | 77.0 | 84.2 | 77.9 | 82.1 | 38.4 | 75.9 |
SAFN [28] | 93.6 | 61.3 | 84.1 | 70.6 | 94.1 | 79.0 | 91.8 | 79.6 | 89.9 | 55.6 | 89.0 | 24.4 | 76.1 |
ETD [29] | 93.5 | 69.7 | 81.0 | 80.4 | 93.5 | 89.8 | 92.7 | 85.3 | 89.5 | 76.4 | 87.4 | 27.2 | 79.7 |
CTSN [46] | 92.3 | 65.1 | 84.2 | 68.4 | 90.4 | 61.2 | 92.3 | 74.1 | 88.7 | 66.9 | 82.3 | 19.4 | 75.4 |
DWL [20] | 90.7 | 80.2 | 86.1 | 67.6 | 92.4 | 81.5 | 86.8 | 78.0 | 90.6 | 57.1 | 85.6 | 28.7 | 77.1 |
DSAN [25] | 90.9 | 66.9 | 75.7 | 62.4 | 88.9 | 77.0 | 93.7 | 75.1 | 92.8 | 67.6 | 89.1 | 39.4 | 75.1 |
DMP [30] | 92.1 | 75.0 | 78.9 | 75.5 | 91.2 | 81.9 | 89.0 | 77.2 | 93.3 | 77.4 | 84.8 | 35.1 | 79.3 |
DCAN [13] | 94.9 | 83.7 | 75.7 | 56.5 | 92.9 | 86.8 | 83.8 | 76.5 | 88.4 | 81.6 | 84.2 | 51.1 | 79.7 |
GPTKL (ours) | 93.3 | 82.3 | 81.5 | 59.3 | 96.0 | 91.7 | 86.5 | 78.0 | 94.5 | 88.0 | 88.9 | 48.8 | 82.4 |
Transfer Kernel Learning | Mutual Information | S→M | C→A | P→C |
---|---|---|---|---|
× | × | 60.1 | 37.4 | 37.2 |
× | √ | 95.6 | 62.1 | 54.5 |
√ | × | 98.5 | 65.1 | 56.0 |
√ | √ | 99.1 | 67.8 | 57.9 |
Methods | S→M | C→A | P→C |
---|---|---|---|
GPDA | 35.88 | 31.5 | 55.42 |
DSAN | 22.65 | 14.61 | 24.82 |
GPTKL | 20.61 | 14.11 | 23.05 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Ge, P.; Sun, Y. Gaussian Process-Based Transfer Kernel Learning for Unsupervised Domain Adaptation. Mathematics 2023, 11, 4695. https://doi.org/10.3390/math11224695