Review

An In-Depth Analysis of Domain Adaptation in Computer and Robotic Vision

by Muhammad Hassan Tanveer 1,*, Zainab Fatima 2,*, Shehnila Zardari 2 and David Guerra-Zubiaga 1

1 Department of Robotics & Mechatronics Engineering, Kennesaw State University, Marietta, GA 30060, USA
2 Department of Software Engineering, NED University of Engineering & Technology, Karachi 75270, Pakistan
* Authors to whom correspondence should be addressed.
Appl. Sci. 2023, 13(23), 12823; https://doi.org/10.3390/app132312823
Submission received: 11 August 2023 / Revised: 19 October 2023 / Accepted: 16 November 2023 / Published: 29 November 2023

Abstract: This review article comprehensively delves into the rapidly evolving field of domain adaptation in computer and robotic vision. It offers a detailed technical analysis of the opportunities and challenges associated with this topic. Domain adaptation methods play a pivotal role in facilitating seamless knowledge transfer and enhancing the generalization capabilities of computer and robotic vision systems. Our methodology involves systematic data collection and preparation, followed by the application of diverse assessment metrics to evaluate the efficacy of domain adaptation strategies. This study assesses the effectiveness and versatility of conventional, deep learning-based, and hybrid domain adaptation techniques within the domains of computer and robotic vision. Through a cross-domain analysis, we scrutinize the performance of these approaches in different contexts, shedding light on their strengths and limitations. The findings gleaned from our evaluation of specific domains and models offer valuable insights for practical applications while reinforcing the validity of the proposed methodologies.

1. Introduction

1.1. Background and Scope

The success of cutting-edge algorithms and models in the fields of computer and robotic vision is heavily dependent on the availability of large and varied annotated datasets [1]. However, because of the mismatch between the source and target domains, deploying these models in real-world settings frequently results in severe performance degradation; this phenomenon is known as the domain shift problem [2]. Domain adaptation approaches have emerged as viable strategies to address this problem: by enabling knowledge transfer across domains, they improve the generalization capacity of computer and robotic vision systems [3].
The domain shift problem arises from variations in data distribution, illumination, ambient settings, and sensor properties between the source and target domains [4]. Traditional deep learning-based models and computer vision algorithms are naturally vulnerable to such changes, demanding robust and adaptable approaches to provide consistent performance across many domains [5]. In this study, we explore the complexities of domain adaptation in computer and robotic vision to determine how well these techniques solve the domain shift problem [6]. This research examines a wide variety of domain adaptation techniques, from time-tested methodologies to cutting-edge deep learning-based approaches, all of which are designed to address the difficulties posed by domain differences.

1.2. Objectives and Research Questions

The major goal of this study is to provide a thorough review of domain adaptation methods as they apply to computer and robotic vision. To achieve this goal, we address the following research questions (RQs):
  • RQ1: What are the primary causes and manifestations of the domain shift problem in robotic and computer vision?
  • RQ2: How do traditional domain adaptation techniques and deep learning-based approaches differ from one another, and what distinctive benefits does each category provide?
  • RQ3: What conclusions can be drawn from a cross-domain analysis of domain adaptation methodologies when dealing with a variety of source and target domains?
  • RQ4: How do the performance and generalization capabilities of computer and robotic vision systems, with the incorporation of domain adaptation techniques, differ from those of baseline models?
  • RQ5: What are the challenges and insights encountered during the process of domain adaptation for robotic and computer vision?
We seek to offer a thorough and technical understanding of domain adaptation in computer and robotic vision, supporting developments in these fields and paving the way for greater performance in real-world applications. To this end, we rigorously address these research questions.

1.3. Methodology and Articles Selection

This systematic literature review (SLR) on domain adaptation in computer and robotic vision followed a structured and organized protocol, ensuring the selection of pertinent, high-quality research publications [7,8]. To guarantee openness and repeatability, the SLR was carried out in accordance with the guidelines proposed and refined by Petersen et al. [9].

1.3.1. Data Collection and Preprocessing

To find pertinent research papers, a set of thorough search queries was created in the first phase of this SLR. Boolean operators were used to create the following search terms:
  • “Domain adaptation” AND “computer vision”;
  • “Domain adaptation” AND “robotic vision”;
  • “Domain adaptation” AND “visual domain transfer”;
  • “Domain shift” AND “computer vision”;
  • “Domain shift” AND “robotic vision”.
Several respected academic resources were searched using these queries, including IEEE Xplore, the ACM Digital Library, SpringerLink, ScienceDirect, and Google Scholar. To capture current developments on the subject, the search was limited to publications from January 2019 to September 2023.
The inclusion criteria for article selection were defined as follows:
  • Articles that explicitly discuss domain adaptation strategies in relation to computer and robotic vision;
  • Articles written in English, to ensure accessibility and comprehension;
  • Publications that are accessible in full-text format as preprints, journal articles, or conference papers.
Exclusion criteria were established to omit research that was repetitive or irrelevant:
  • Articles unrelated to visual domain transfer or domain adaptation in computer or robotic vision;
  • Articles written in languages other than English;
  • Articles with little information or without access to the full text.
A total of 264 pertinent articles were identified after the search queries were run and the inclusion and exclusion criteria were applied. Figure 1 summarizes the entire selection process.

1.3.2. Evaluation Metrics

A systematic evaluation approach was used to rate the papers’ quality and applicability. To decide which publications should be included in the SLR, the authors examined the titles and abstracts of the identified articles. Disagreements were resolved through discussion. All authors contributed equally and collaborated on every task. Selection was based on transparent criteria, ensuring alignment with the research objectives. After this initial screening, the authors moved on to a full-text analysis of the remaining papers. The significance of the experimental assessments, the applicability of the datasets employed, and the papers’ contributions to domain adaptation approaches in computer and robotic vision were considered throughout this round of review.
To ensure transparency and encourage the repeatability of the methods used, the SLR complies with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) criteria [10,11,12]. A thorough and academic basis for the examination and assessment of domain adaptation strategies in computer and robotic vision is provided by the choice and inclusion of pertinent publications [13].

1.3.3. Geographical Attribution of Research Articles

In this subsection, we elaborate on the methodology used to attribute geographical affiliations to the research articles included in our analysis. It is important to note that the process of determining the geographical source of research articles can be complex, especially in cases involving international collaboration or multi-region affiliations of authors [14]. We followed a set of principles to attribute geographical affiliations:
  • Primary author affiliation: in cases where the primary author of a research article had affiliations in multiple countries or regions, we attributed the article to the country or region corresponding to the primary author’s primary institutional affiliation;
  • Collaborative research: for articles involving collaboration among multiple countries or regions, we counted the article once for each collaborating country or region. This approach ensures that collaborative efforts are appropriately acknowledged within our regional analysis;
  • Inclusive approach: we adopted an inclusive approach that recognized the contributions of all regions involved in collaborative research. This approach aligns with our goal of providing a comprehensive overview of the geographical distribution of domain adaptation research.
While we made every effort to accurately attribute geographical affiliations, certain complexities, such as dual affiliations or author mobility, may not have been fully addressed. Additionally, author information may change over time, and our analysis represents a snapshot based on available data.
By disclosing our attribution principles and verification process, we aim to provide transparency and address potential concerns about the accuracy and validity of our regional analysis.

2. Domain Adaptation Techniques

Domain adaptation techniques play a pivotal role in addressing the domain shift problem encountered in computer and robotic vision [15]. These methods are designed to improve the generalization skills of vision models, enabling them to function well in situations outside the scope of their training data. In this section, we provide a detailed exploration of various domain adaptation techniques and their categorization. Each type offers unique strategies for mitigating domain shift challenges and improving model performance in diverse real-world applications.

2.1. Overview of Domain Adaptation Techniques

This section presents a detailed exploration of domain adaptation strategies utilized in computer and robotic vision. With the use of these methods, domain shift issues may be overcome and knowledge transfer between various data distributions can be facilitated [16]. There are three types of domain adaptation techniques: conventional, deep learning-based, and hybrid. Traditional approaches, like Transfer Component Analysis (TCA) and Maximum Mean Discrepancy (MMD), concentrate on statistical feature space alignment, whereas deep learning-based approaches, such as Domain Adversarial Neural Networks (DANN) and CycleGAN, take advantage of neural networks to develop domain-invariant representations [17]. Hybrid systems like DAN combine traditional and deep learning algorithms to take advantage of their complementary capabilities [17]: traditional methods furnish a stable foundation for aligning domains, imparting a reliable structural framework, while deep learning techniques enhance this alignment by capturing the intricate, non-linear relationships present within the data. This synergy results in heightened robustness, particularly when confronted with challenges such as limited labeled data or noisy datasets.
These domain adaptation strategies have been shown to be quite effective in various applications of robotic and computer vision. For instance, when transferring from a synthetic domain to a real-world environment, domain adaptation strategies have increased accuracy in object identification tasks from 60% to 80% [17,18]. Additionally, domain adaptation approaches have demonstrated a 15% reduction in classification error in robotic vision scenarios when adapting to unfamiliar settings. Domain adaptation techniques are becoming increasingly useful in real-world situations, making them essential tools for enhancing the generalization capacities of computer and robotic vision systems [19]. These strategies enable the smooth deployment of vision models in a variety of real-world applications, including autonomous cars, healthcare systems, and industrial automation, by efficiently resolving the domain shift problem [20,21].
Additionally, domain adaptation research in computer and robotic vision is dispersed across several geographical areas, indicating the worldwide relevance of this discipline. It has received funding and support from different organizations and agencies, such as the National Science Foundation (NSF), the Australian Research Council (ARC), the European Union (EU), and the National Natural Science Foundation of China (NSFC). To demonstrate this, Table 1 compares the scientific effort conducted in this field by region.
Table 1 shows that Asia leads domain adaptation research with 180 publications, followed by North America with 104, Europe with 66, South America with 46, and Oceania with 31, which also represents a notable contribution. The distribution of research throughout these areas shows broad interest in, and international cooperation on, enhancing domain adaptation methods [22]. The cross-pollination of ideas encouraged by this geographical diversity enables academics to handle domain shift difficulties in a variety of environmental contexts and applications [23]. While Asia leads in terms of publications, North America and Europe follow closely, indicating considerable participation in this developing sector. The research landscape in domain adaptation spans a wide range of study subjects and application domains.
It is important to note that the analysis presented in this paper is based on data from Google Trends, which reflect relative search interest and user queries. The regional concentration and geographical distribution of that interest provide insights into the worldwide importance of certain topics. However, Google Trends data can be influenced by various factors, including population, language, and user search habits. Additionally, the attribution of research articles to specific regions may involve inherent complexities in cases of international collaboration or multi-region affiliations of authors. The methodology used for geographical attribution should be considered when interpreting the results.
While the use of online data sources like Google Trends has faced criticism regarding reliability, several reputable academic papers published in well-established, high-impact journals [24], among many others, have drawn on such sources, emphasizing their credibility. The utilization of Google Trends data in these papers underpins its relevance and usefulness for rigorous research, and this adoption within the academic community lends credence to Google Trends as a viable data source for scholarly investigations.
Based on data from Google Trends, Figure 2 shows the regional interest in domain adaptation, computer vision, and robotic vision. The heat map shows the differing levels of interest across various locations, with China emerging as the region with the most interest in these sectors [25,26].
Collaborations between academics from various locations have the potential to create improvements and creative answers to pressing problems as the subject of domain adaptation in computer and robotic vision continues to develop [27,28]. The combined knowledge and skill from these areas aid in the creation of effective domain adaptation approaches, enabling vision systems to succeed in a vast range of real-life tasks with different domain changes [29].

2.1.1. Scholarly Landscape Visualization

In this section, we present a visual representation of the scholarly landscape pertaining to our research focus on domain adaptation in computer vision. To provide a comprehensive understanding of the evolution and interconnectivity of research in this domain, we have utilized Connected Papers, a powerful tool for exploring and visualizing academic papers and their relationships. We have conducted searches for three key topics: “Domain adaptation for image segmentation”, “Domain adaptation for object identification”, and “Domain adaptation for object classification”. The graphs presented below in Figure 3, Figure 4 and Figure 5 and Table 2, Table 3, Table 4, Table 5 and Table 6 offer insights into the seminal works that laid the foundation for these topics (“prior papers”) and the subsequent research that has built upon these foundations (“derivative papers”). These visualizations serve as a valuable reference to contextualize our own contributions within the broader academic landscape of domain adaptation in computer vision.
Table 2, Table 4 and Table 6 show prior works, while Table 3, Table 5 and Table 7 show derivative works related to semantic segmentation, object identification, and object classification in the field of domain adaptation, three important tasks in computer vision. Each table includes the references, titles, last authors, years, citations, and graph citations of the works. The citations indicate how many times a work has been cited by other papers, while the graph citations indicate how many times it has been cited by papers that also cite the current paper. These tables can be used to compare the impact and relevance of different works in the field.

Domain Adaptation for Image Segmentation

Prior works, shown in Table 2, are the research papers most commonly cited by the papers in the graph. They are the seminal works of this field and a natural starting point for readers.
In Table 2, the most frequently referenced work [30,31] enhanced performance on a variety of image recognition tasks by introducing a revolutionary design for deep neural networks; it has 141,511 citations and 23 graph citations.
The most recent study [32] suggested a technique for knowledge transfer between domains by utilizing cycle-consistency losses and generative adversarial networks. With 2445 total citations and 29 graph citations, it has the most graph citations of any work in the table. The most common publication year is 2016, shared by several works in the table [33,34,35,36,37]. These studies advanced semantic segmentation by offering fresh approaches, datasets, and standards.
In Table 3, the most recent works are from 2023, which have three entries in the table [40,41,49]. These works provide an overview of the state-of-the-art methods, challenges, and applications of domain adaptation and semantic segmentation in various domains. The most influential work is [41], which has 18 citations and 18 graph citations. This work provides a comprehensive survey of test time adaptation methods that aim to improve the generalization performance of models under distribution shifts. This work covers various aspects of test time adaptation, such as definitions, taxonomies, evaluation protocols, applications, and open challenges.

Domain Adaptation for Object Identification

In Table 4, the most cited work is [50], which introduced a novel architecture for deep neural networks that improved the performance on various image recognition tasks. This work has 141,511 citations and 33 graph citations. The most recent work is [51], which proposed an improved version of the YOLO (You Only Look Once) framework that achieved state-of-the-art results on several object detection benchmarks. This work has 7287 citations and 31 graph citations, which is impressive considering its recency. The most frequent last author is Ali Farhadi, who has three works in the table [52]. These works are part of the YOLO series, which pioneered the idea of using a single neural network to perform object detection in real time. The most common theme is region proposal networks (RPNs), which are a technique used to generate candidate regions for object detection. There are four works in the table that use RPNs [33,53,54]. These works improved the speed and accuracy of object detection by using RPNs in conjunction with other methods such as focal loss, feature pyramid networks, and multibox detectors.
From Table 5, it can be seen that the most cited work is [54], which provided a comprehensive survey of existing methods, datasets, and metrics for small object detection in UAV images. This work also proposed a new large-scale benchmark dataset called UAVDT that contains more than 80,000 images with 14 categories of objects. This work has 34 citations and five graph citations. The most recent works are from 2023, which have six entries in the table [58,61,63,64,65,66]. These works introduced novel methods and models to improve the performance of small object detection in UAV images by addressing various challenges such as semi-supervised learning, refocusing, dual backbone networks, density-aware scale adaptation, cross-layer feature aggregation, and occlusion-guided multi-task learning. The most common theme is YOLO (You Only Look Once), a framework that uses a single neural network to perform object detection in real time. Two works in the table use YOLO as their base model [63,67]. These works modified and improved the YOLO framework by using a dual backbone network and a vectorized IOU metric to enhance the accuracy and speed of small object detection in UAV images.

Domain Adaptation for Object Classification

In Table 6, the most cited work is [69,70], which introduced a novel architecture for deep neural networks that improved performance on various image recognition tasks; it has 141,511 citations and 29 graph citations. The most recent work is [71], which proposed a new regularization technique that uses attention maps to selectively drop out features that are irrelevant for object localization. This work has 277 citations and 37 graph citations. The most frequent first author is Xiaolin Zhang, who has two works in the table [72]. These works developed new methods to generate complementary and self-produced guidance signals for weakly supervised object localization (WSOL) using adversarial learning and self-attention mechanisms. The most common theme is activation-based methods, a class of WSOL methods that uses the activation maps of convolutional layers to infer object locations. Six works in the table use activation-based methods [73,74]. These works improved activation-based methods through strategies such as discriminative learning, complementary learning, self-produced guidance, hide-and-seek, divergent activation, and attention-based dropout.
In Table 7, the most recent works are from 2023, with several entries in the table [77,79,81,83]. These works introduced novel methods and models to improve the performance of WSOL by using generative prompts, open-world settings, causal knowledge distillation, multi-modal class-specific tokens, and rethinking the localization process. The most cited work is [83], which provided a comprehensive survey of existing methods, datasets, and metrics for WSOL and discussed the challenges and future directions of WSOL research; it has 39 citations and eight graph citations.
The most frequent first author is Lian Xu, who has two works in the table [84]. These works developed new methods that use multi-modal class-specific tokens and multi-class token transformers to perform WSOL and semantic segmentation tasks. The most common theme is transformer-based methods, a class of WSOL methods that uses transformer networks to model global dependencies and attention mechanisms among image features. Three works in the table use transformer-based methods [85]. These works improved transformer-based methods by using multi-modal tokens, multi-class tokens, and adversarial learning techniques.

2.1.2. Traditional Domain Adaptation Methods

Traditional domain adaptation techniques align the feature spaces of the source and target domains using statistical techniques. One extensively utilized approach is TCA, which narrows the distribution mismatch between domains by mapping the data onto a shared latent space. TCA has been used to effectively align feature distributions and enhance model performance in computer vision applications like object recognition [86].
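To make TCA’s computation concrete, the following is a minimal sketch with a linear kernel; the regularizer mu, the component count, and the random inputs are illustrative assumptions, not values taken from any surveyed paper:

```python
import numpy as np

def tca(Xs, Xt, n_components=2, mu=1.0):
    """Simplified TCA with a linear kernel; returns embedded source/target."""
    X = np.vstack([Xs, Xt])
    ns, nt = len(Xs), len(Xt)
    n = ns + nt

    # MMD coefficient matrix L: its quadratic form measures the gap
    # between the source and target means in the embedded space.
    e = np.vstack([np.full((ns, 1), 1.0 / ns), np.full((nt, 1), -1.0 / nt)])
    L = e @ e.T

    K = X @ X.T                               # linear kernel matrix
    H = np.eye(n) - np.full((n, n), 1.0 / n)  # centering matrix

    # Leading eigenvectors of (K L K + mu I)^-1 K H K give the components.
    M = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    eigvals, eigvecs = np.linalg.eig(M)
    top = np.argsort(-eigvals.real)[:n_components]
    W = eigvecs[:, top].real

    Z = K @ W                                 # shared latent representation
    return Z[:ns], Z[ns:]

Zs, Zt = tca(np.random.randn(50, 8), np.random.randn(40, 8) + 1.0)
```

Projecting both domains through the shared components W yields representations whose source and target means are pulled together while overall data variance is preserved.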
Another widely used technique is MMD, which quantifies the difference between the means of the source and target data in a reproducing kernel Hilbert space. In terms of domain adaptation for image classification tasks, MMD has yielded encouraging results [87]. MDD uses a multi-domain discriminator to train models to learn domain-invariant features across diverse data sources, enabling better generalization to new domains. MMD-DA (Maximum Mean Discrepancy-Domain Adaptation) extends MMD by adding domain adaptation techniques to enhance feature distribution alignment, which is beneficial when labeled source data and unlabeled target data are available.
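As an illustration of how an MMD-style criterion can be estimated in practice, the sketch below computes a biased MMD² between source and target feature batches with a Gaussian kernel; the median-heuristic bandwidth is an assumed convention, one of several in common use:

```python
import torch

def gaussian_kernel(x, y, bandwidth):
    # RBF kernel matrix from pairwise squared Euclidean distances.
    sq_dists = torch.cdist(x, y) ** 2
    return torch.exp(-sq_dists / (2 * bandwidth ** 2))

def mmd2(source, target):
    # Median heuristic for the bandwidth, computed on the pooled batch.
    pooled = torch.cat([source, target], dim=0)
    bandwidth = torch.median(torch.cdist(pooled, pooled)) + 1e-8

    # Biased MMD^2 = E[k(s,s')] + E[k(t,t')] - 2 E[k(s,t)].
    return (gaussian_kernel(source, source, bandwidth).mean()
            + gaussian_kernel(target, target, bandwidth).mean()
            - 2 * gaussian_kernel(source, target, bandwidth).mean())

src = torch.randn(128, 64)        # source-domain features
tgt = torch.randn(128, 64) + 0.5  # target-domain features with a mean shift
print(f"MMD^2 estimate: {mmd2(src, tgt).item():.4f}")
```

When such a term is minimized alongside the task loss, the two feature distributions are drawn together in the kernel-induced space.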

2.1.3. Deep Learning-Based Methods

Deep learning-based domain adaptation methods make use of neural networks’ ability to learn representations that are independent of the source domain. A well-known method is DANN, which integrates a domain discriminator into the network to learn domain-invariant features. By successfully decreasing domain differences, DANN has attained state-of-the-art performance in cross-domain image classification tasks [88].
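The core of DANN is a gradient reversal layer (GRL) inserted between the feature extractor and the domain discriminator. The snippet below is a minimal PyTorch sketch of that mechanism; the placeholder linear layers stand in for the convolutional backbones a real DANN would use:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg() * ctx.lam, None

features = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # feature extractor
label_head = nn.Linear(32, 10)    # task classifier
domain_head = nn.Linear(32, 2)    # domain discriminator

x = torch.randn(16, 64)
f = features(x)
class_logits = label_head(f)
domain_logits = domain_head(GradReverse.apply(f, 1.0))
# Training the discriminator through the GRL pushes the feature extractor
# toward representations the discriminator cannot tell apart, i.e.,
# domain-invariant features.
```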
Another popular deep learning-based domain adaptation strategy, created with image-to-image translation problems in mind, is CycleGAN. It is adaptable and relevant to many image domain adaptation scenarios since it learns a mapping between source and target domains without the need for paired data [89].
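Unpaired training in CycleGAN rests on a cycle-consistency term. The sketch below illustrates that loss under assumed placeholder generators G_st (source to target) and G_ts (target to source); the weight of 10 mirrors a commonly reported setting but is an assumption here:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def cycle_consistency_loss(G_st, G_ts, real_src, real_tgt, weight=10.0):
    # Translate each image to the other domain and back; the round trip
    # should reconstruct the original, even though no pairs exist.
    rec_src = G_ts(G_st(real_src))
    rec_tgt = G_st(G_ts(real_tgt))
    return weight * (F.l1_loss(rec_src, real_src)
                     + F.l1_loss(rec_tgt, real_tgt))

# Toy stand-ins for the generators (real ones are conv encoder-decoders).
G_st, G_ts = nn.Linear(64, 64), nn.Linear(64, 64)
loss = cycle_consistency_loss(G_st, G_ts,
                              torch.randn(8, 64), torch.randn(8, 64))
```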
Another such model is ADDA (Adversarial Discriminative Domain Adaptation), shown in Figure 6, which employs an adversarial discriminator to align feature distributions between the source and target domains [90]. MADA (Multi-Adversarial Domain Adaptation) employs multiple adversarial discriminators to enhance adaptation across diverse domains, resulting in improved overall performance. It excels particularly in scenarios involving multiple source domains, ensuring the effective alignment of feature distributions and better adaptation outcomes [91].
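To sketch how ADDA-style adversarial alignment alternates in practice, the toy step below uses placeholder linear encoders and illustrative hyperparameters; in full ADDA, the target encoder is initialized from a source encoder pre-trained with a label classifier:

```python
import torch
import torch.nn as nn

src_enc = nn.Linear(64, 32)   # frozen after source-domain pre-training
tgt_enc = nn.Linear(64, 32)   # in ADDA, initialized from src_enc
disc = nn.Linear(32, 1)       # domain discriminator

bce = nn.BCEWithLogitsLoss()
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-4)
g_opt = torch.optim.Adam(tgt_enc.parameters(), lr=1e-4)

xs, xt = torch.randn(16, 64), torch.randn(16, 64)

# Step 1: discriminator separates source (label 1) from target (label 0).
d_loss = (bce(disc(src_enc(xs).detach()), torch.ones(16, 1))
          + bce(disc(tgt_enc(xt).detach()), torch.zeros(16, 1)))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# Step 2: target encoder updates so its features are scored as "source".
g_loss = bce(disc(tgt_enc(xt)), torch.ones(16, 1))
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```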
MUNIT (Multi-Modal Unsupervised Image-to-Image Translation) strengthens domain adaptation by introducing multiple output modes for unsupervised image-to-image translation. This innovation broadens its applicability, allowing it to handle a wide range of adaptation tasks with flexibility and efficiency [92,93].
The mechanism of a typical deep learning-based domain adaptation model is shown in Figure 7.
Its general operating mechanism is based upon transfer learning [93].
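As a generic sketch of that transfer-learning recipe (not the exact architecture in Figure 7), the snippet below freezes a source-trained backbone and fits only a new task head on target-domain data; all layer sizes and data are placeholders:

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(64, 32), nn.ReLU())  # stand-in for a
                                                        # source-trained CNN
head = nn.Linear(32, 5)                                 # new target-domain head

for p in backbone.parameters():      # freeze the transferred representation
    p.requires_grad = False

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(16, 64)              # a batch of target-domain inputs
y = torch.randint(0, 5, (16,))       # target-domain labels
optimizer.zero_grad()
loss = criterion(head(backbone(x)), y)
loss.backward()
optimizer.step()
```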

2.1.4. Hybrid Methods

To exploit each method’s advantages, hybrid domain adaptation strategies combine conventional and deep learning-based approaches. Deep Adaptation Networks (DAN) represent one such method, combining deep neural networks with multi-kernel learning to provide efficient domain adaptation [94]. DAN has been used for a variety of computer vision applications and has been shown to perform well under domain shift. Another efficient hybrid model is CDAN-SA-MTL (Conditional Domain Adversarial Network with Self-Attention and Multi-Task Learning), which combines self-attention and multi-task learning strategies to improve domain adaptation. By simultaneously considering multiple tasks and utilizing self-attention mechanisms, it achieves enhanced adaptation results and robustness in scenarios involving domain variations [95].
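One way to picture DAN’s multi-kernel ingredient is to average the MMD criterion over a family of RBF bandwidths. The sketch below is a simplified illustration with equal kernel weights and hand-picked bandwidths, not DAN’s full formulation with learned kernel weights:

```python
import torch

def rbf(x, y, bandwidth):
    return torch.exp(-torch.cdist(x, y) ** 2 / (2 * bandwidth ** 2))

def multi_kernel_mmd2(src, tgt, bandwidths=(0.5, 1.0, 2.0, 4.0)):
    # Averaging MMD^2 across bandwidths makes the criterion sensitive to
    # distribution gaps at several scales at once.
    terms = [rbf(src, src, b).mean() + rbf(tgt, tgt, b).mean()
             - 2 * rbf(src, tgt, b).mean() for b in bandwidths]
    return torch.stack(terms).mean()

loss = multi_kernel_mmd2(torch.randn(64, 32), torch.randn(64, 32) + 0.3)
```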
DANN-SA-MTL (Domain Adversarial Neural Network with Self-Attention and Multi-Task Learning) integrates self-attention and multi-task learning into the domain adaptation process. This approach boosts model performance by incorporating self-attention mechanisms for improved feature extraction and multi-task learning to handle diverse adaptation tasks, making it a versatile choice for domain adaptation challenges [96].

2.2. Evaluation of Domain Adaptation Techniques

In this section, we perform a thorough analysis of domain adaptation strategies used in robotic and computer vision. Three tables are used to illustrate the results as we conduct a comparative examination of the efficacy and applicability of various state-of-the-art strategies. Table 8 highlights the accuracy, domains, contributions, benefits, and drawbacks of major domain adaptation models and technologies. Traditional transfer learning approaches are consistently outperformed by adversarial techniques, such as ADDA and its variants, which give better performance at the expense of added complexity. Models like CDAN and MADA perform better on smaller datasets, making them appropriate for situations with little or no labeled data. Approaches like importance reweighting, multi-task learning, and maximum mean discrepancy loss improve model accuracy but raise model complexity. For visual translation and adaptation, generative models like CoGAN and MUNIT provide promising outcomes that outperform conventional approaches. Models like DANN, which were early adopters of adversarial training in domain adaptation and are capable of handling significant domain shifts, are nevertheless known for their hyperparameter sensitivity.
Table 9 gives a summary of various publications on domain adaptation, emphasizing their merits and contributions. These papers offer a variety of strategies for enhancing model efficacy in domain adaptation challenges. Contributions include the introduction of adversarial strategies, generative models, multi-task learning, and other combinations of these techniques. Benefits include the better handling of complex interactions, realistic image generation, adaptability to new domains, and better use of unlabeled data.
Table 9 presents a comprehensive overview of research sources focused on domain adaptation, offering several noteworthy conclusions and insights. One notable finding is the diverse range of innovative techniques introduced by researchers, including adversarial learning, generative adversarial networks, meta-learning, and self-supervised learning. These approaches exhibit significant promise in enhancing performance across various tasks and domains, as exemplified by [161], which introduces a multi-task learning framework for domain adaptation. Furthermore, certain models, such as [162], illustrate their potential by generating realistic target domain images, while [163] highlights the importance of learning invariant features for effective adaptation. Additionally, combining different techniques, as demonstrated in [164], with ensemble learning and self-supervised learning, can further contribute to improved domain adaptation performance. The table shows the variety of research projects, from object detection in industrial automation to semantic segmentation in autonomous cars [165]. The studies cover a wide range of topics, demonstrating the pervasive interest in and importance of domain adaptation in several practical applications [166].
Overall, this table underscores the diversity of strategies and models developed by researchers to address domain adaptation challenges, providing valuable insights into the field’s advancements and potential avenues for future research.
Table 9. Systematic comparison of the literature published on the use of domain adaptation in the field of computer and robotic vision.
Paper | Contribution | Advantages
[161] | Introduces a new multi-task learning framework for domain adaptation | Can improve performance on multiple tasks
[162] | Presents a model based on adversarial conditional image synthesis | Can improve performance by generating realistic images from the target domain
[167] | Presents a model based on adversarial conditional variational autoencoders | Can improve performance by using adversarial conditional variational autoencoders to learn representations that are invariant to domain shift
[163] | Introduces a new adversarial training framework for domain adaptation that uses an ensemble of discriminators | Can improve performance by combining multiple discriminators
[164] | Presents a model based on adversarial training and meta-learning | Can improve performance by using adversarial training and meta-learning to learn a model that can generalize to new domains
[165] | Introduces a new method for domain adaptation that combines adversarial training and self-supervised learning | Can improve performance by learning features that are invariant to domain shift and using self-supervised learning to learn representations that are transferable to new domains
[166] | Introduces a new conditional generative adversarial network framework for domain adaptation | Can improve performance by using conditional generative models
[168] | Presents a model based on conditional generative adversarial networks | Can improve performance by using conditional generative adversarial networks to generate realistic images from the target domain
[169] | Presents a model based on conditional Wasserstein generative adversarial networks | Can improve performance by generating realistic images from the target domain
[170] | Presents a model based on domain-invariant feature extraction | Can improve performance by using domain-invariant feature extraction to learn features that are invariant to domain shift
[171] | Introduces a new method for domain adaptation that combines ensemble learning and self-supervised learning | Can improve performance by combining multiple models and using self-supervised learning to learn features that are transferable to new domains
[172] | Presents a model based on ensemble learning and transfer learning | Can improve performance by using ensemble learning and transfer learning to learn a model that can generalize to new domains
[173] | Presents a model based on few-shot learning and meta-learning | Can improve performance by using few-shot learning to learn a model that can generalize to new domains and by using meta-learning to adapt to new domains more quickly
[174] | Presents a model based on few-shot learning and meta-learning | Can improve performance by using few-shot learning and meta-learning to learn a model that can generalize to new domains
[175] | Presents a model based on generative adversarial networks for semi-supervised learning | Can improve performance by learning from both labeled and unlabeled data
[176] | Introduces a new method for learning invariant features for domain adaptation | Can improve performance by learning representations that are invariant to domain shift
[177] | Introduces a new method for domain adaptation that combines invariant representations and self-supervised learning | Can improve performance by learning representations that are invariant to domain shift and using self-supervised learning to learn features that are transferable to new domains
[178] | Presents a model based on meta-learning for transferable features | Can improve performance by learning features that are transferable to new domains
[179] | Presents a model based on multi-task learning and attention | Can improve performance by using multi-task learning and attention to learn representations that are invariant to domain shift
[180] | Presents a model based on multi-task learning for few-shot image classification | Can improve performance by learning multiple tasks with few examples
[181] | Presents a model based on patch-level self-supervised learning | Can improve performance by using patch-level self-supervised learning to learn features that are invariant to domain shift
[182] | Presents a model based on self-supervised contrastive learning | Can improve performance by learning representations that are invariant to domain shift
[183] | Presents a model based on self-supervised contrastive learning | Can improve performance by learning representations that are invariant to domain shift
[184] | Presents a model based on self-supervised learning and synthetic data | Can improve performance by using self-supervised learning to learn features that are invariant to domain shift and by generating synthetic data that are like the target domain
[185] | Presents a model based on synthetic data and domain-invariant feature aggregation | Can improve performance by generating synthetic data that are like the target domain and by aggregating features from multiple domains
[186] | Presents a model based on synthetic data and self-supervised learning | Can improve performance by using synthetic data and self-supervised learning to learn features that are invariant to domain shift
[187] | Introduces a new Wasserstein adversarial training framework for domain adaptation | Can improve performance by using Wasserstein distance
[188] | Presents a model based on adversarial learning and meta-learning | Can improve performance by using adversarial learning and meta-learning to learn a model that can generalize to new domains
[189] | Presents a model based on adversarial learning and meta-learning | Can improve performance by using adversarial learning and meta-learning to learn a model that can generalize to new domains
[190] | Presents a model based on adversarial learning and self-supervised learning | Can improve performance by using adversarial learning and self-supervised learning to learn a model that can generalize to new domains
[191] | Presents a model based on adversarial learning and transfer learning | Can improve performance by using adversarial learning and transfer learning to learn a model that can generalize to new domains
[192] | Presents a model based on adversarial multi-agent reinforcement learning | Can improve performance by using adversarial multi-agent reinforcement learning to learn a model that can generalize to new domains
[193] | Presents a model based on adversarial training and distillation | Can improve performance by using adversarial training and distillation to learn a model that can generalize to new domains
[194] | Presents a model based on adversarial training and self-supervised learning | Can improve performance by using adversarial training and self-supervised learning to learn a model that can generalize to new domains
[195] | Presents a model based on attention and normalization | Can improve performance by using attention and normalization to learn representations that are invariant to domain shift
[196] | Presents a model based on conditional generative adversarial networks and CycleGAN | Can improve performance by using conditional generative adversarial networks and CycleGAN to generate realistic images from the target domain
[197] | Presents a model based on ensemble learning and domain adaptation | Can improve performance by using ensemble learning and domain adaptation to learn a model that can generalize to new domains
[198] | Presents a model based on few-shot learning and data augmentation | Can improve performance by using few-shot learning and data augmentation to learn a model that can generalize to new domains
[199] | Presents a model based on few-shot learning and interpolation | Can improve performance by using few-shot learning and interpolation to learn a model that can generalize to new domains
[200] | Presents a model based on few-shot learning and self-supervised learning | Can improve performance by using few-shot learning and self-supervised learning to learn a model that can generalize to new domains
[201] | Presents a model based on generative adversarial networks and self-supervised learning | Can improve performance by using generative adversarial networks and self-supervised learning to learn features that are invariant to domain shift
[202] | Presents a model based on graph neural networks | Can improve performance by using graph neural networks to learn representations that are invariant to domain shift
[203] | Presents a model based on invariant feature extraction and distillation | Can improve performance by using invariant feature extraction and distillation to learn a model that can generalize to new domains
[204] | Presents a model based on invariant feature extraction and interpolation | Can improve performance by using invariant feature extraction and interpolation to learn features that are invariant to domain shift
[205] | Presents a model based on meta-learning and importance reweighting | Can improve performance by using meta-learning and importance reweighting to learn a model that can generalize to new domains
[206] | Presents a model based on multi-task learning and cross-domain augmentation | Can improve performance by using multi-task learning and cross-domain augmentation to learn a model that can generalize to new domains
[207] | Presents a model based on multi-task learning and domain-invariant feature extraction | Can improve performance by using multi-task learning and domain-invariant feature extraction to learn features that are invariant to domain shift
[208] | Presents a model based on self-attention and conditional generative adversarial networks | Can improve performance by using self-attention and conditional generative adversarial networks to learn representations that are invariant to domain shift
[209] | Presents a model based on self-supervised contrastive learning | Can improve performance by using self-supervised contrastive learning to learn features that are invariant to domain shift
[210] | Presents a model based on self-supervised contrastive learning | Can improve performance by using self-supervised contrastive learning to learn representations that are invariant to domain shift
[211] | Presents a model based on self-supervised learning and transfer learning | Can improve performance by using self-supervised learning and transfer learning to learn a model that can generalize to new domains
[212] | Presents a model based on synthetic data and transfer learning | Can improve performance by using synthetic data and transfer learning to learn features that are invariant to domain shift
[213] | Presents a model based on temporal ensembling | Can improve performance by using temporal ensembling to learn a model that can generalize to new domains
[214] | Presents a model based on Wasserstein autoencoders | Can improve performance by using Wasserstein autoencoders to learn representations that are invariant to domain shift
[215] | Presents a model based on data augmentation and self-supervised learning | Can improve performance by using data augmentation and self-supervised learning to learn features that are invariant to domain shift
[216] | Presents a model based on adversarial learning and contrastive learning | Can improve performance by using adversarial learning and contrastive learning to learn features that are invariant to domain shift
[217] | Presents a model based on invariant feature extraction and distillation | Can improve performance by using invariant feature extraction and distillation to learn features that are invariant to domain shift
[218] | Presents a model based on semi-supervised learning and transfer learning | Can improve performance by using semi-supervised learning and transfer learning to learn a model that can generalize to new domains
[219] | Presents a model based on few-shot learning and invariant feature extraction | Can improve performance by using few-shot learning and invariant feature extraction to learn features that are invariant to domain shift
[220] | Presents a model based on adversarial learning and meta-learning | Can improve performance by using adversarial learning and meta-learning to learn a model that can generalize to new domains
[221] | Presents a model based on self-supervised learning and generative adversarial networks | Can improve performance by using self-supervised learning and generative adversarial networks to learn features that are invariant to domain shift
[222] | Presents a model based on adversarial learning and transfer learning | Can improve performance by using adversarial learning and transfer learning to learn a model that can generalize to new domains
[223] | Presents a model based on ensemble learning and meta-learning | Can improve performance by using ensemble learning and meta-learning to learn a model that can generalize to new domains
[224] | Presents a model based on adversarial learning and invariant representations | Can improve performance by using adversarial learning and invariant representations to learn features that are invariant to domain shift
[225] | Presents a model based on attention and self-supervised learning | Can improve performance by using attention and self-supervised learning to learn features that are invariant to domain shift
[226] | Presents a model based on conditional domain adversarial networks and self-supervised learning | Can improve performance by using conditional domain adversarial networks and self-supervised learning to learn features that are invariant to domain shift
[227] | Presents a model based on meta-learning and transfer learning | Can improve performance by using meta-learning and transfer learning to learn a model that can generalize to new domains
[228] | Presents a model based on few-shot learning and adversarial learning | Can improve performance by using few-shot learning and adversarial learning to learn a model that can generalize to new domains
[229] | Presents a model based on synthetic data and meta-learning | Can improve performance by using synthetic data and meta-learning to learn a model that can generalize to new domains
[230] | Presents a model based on adversarial learning and multi-task learning | Can improve performance by using adversarial learning and multi-task learning to learn a model that can generalize to new domains
[231] | Presents a model based on conditional generative adversarial networks and adversarial learning | Can improve performance by using conditional generative adversarial networks and adversarial learning to learn features that are invariant to domain shift
[232] | Presents a model based on ensemble learning and self-supervised learning | Can improve performance by using ensemble learning and self-supervised learning to learn a model that can generalize to new domains
[233] | Introduces a new method for domain adaptation that learns multi-domain invariant representations | Can improve performance by learning representations that are invariant to multiple domains
[234] | Presents a model based on self-supervised learning and conditional adversarial networks | Can improve performance by using self-supervised learning and conditional adversarial networks to learn features that are invariant to domain shift
[235] | Presents a model based on adversarial learning and few-shot learning | Can improve performance by using adversarial learning and few-shot learning to learn a model that can generalize to new domains
[236] | Presents a model based on domain-invariant feature aggregation and self-supervised learning | Can improve performance by using domain-invariant feature aggregation and self-supervised learning to learn features that are invariant to domain shift
[237] | Presents a model based on generative adversarial networks and meta-learning | Can improve performance by using generative adversarial networks and meta-learning to learn features that are invariant to domain shift
[238] | Presents a model based on adversarial learning and synthetic data | Can improve performance by using adversarial learning and synthetic data to learn features that are invariant to domain shift
[239] | Presents a model based on multi-task learning and meta-learning | Can improve performance by using multi-task learning and meta-learning to learn a model that can generalize to new domains
[240] | Presents a model based on ensemble learning and transfer learning | Can improve performance by using ensemble learning and transfer learning to learn a model that can generalize to new domains
[241] | Presents a model based on adversarial learning and self-supervised learning | Can improve performance by using adversarial learning and self-supervised learning to learn a model that can generalize to new domains
[242] | Uses conditional generative adversarial networks for domain adaptation | Can generate realistic images from the target domain
[243] | Combines distillation and semi-supervised learning for domain adaptation | Can improve performance by using both techniques
[244] | Uses graph neural networks for domain adaptation | Can better handle complex relationships between features
[245] | Combines meta-learning and adversarial training for domain adaptation | Can improve performance by learning to adapt to new domains
[246] | Combines multi-task learning and distillation for domain adaptation | Can improve performance by using both techniques
[247] | Uses reinforcement learning for domain adaptation | Can improve performance by learning to adapt to new domains
[248] | Uses self-supervised learning for domain adaptation | Can improve performance by using unlabeled data from the target domain
[249] | Introduces a method for adversarial learning with Wasserstein distance for domain adaptation | Can improve performance by using a more robust distance metric
[250] | Introduces a co-training framework for domain adaptation | Can improve performance by using multiple learners
[251] | Introduces a method for learning to correct feature distributions for domain adaptation | Can improve performance by explicitly modeling the domain shift
[252] | Combines adversarial training and data augmentation for domain adaptation | Can improve performance by using both techniques
[253] | Introduces a new adversarial training framework for domain adaptation | Can improve performance by using virtual adversarial examples
[254] | Introduces a Bayesian deep learning framework for domain adaptation | Can better handle uncertainty in the data
[255] | Introduces a new conditional Wasserstein GAN framework for domain adaptation | Can improve performance by using conditional generative models
[256] | Presents a model based on domain-invariant feature aggregation | Can improve performance by aggregating features from multiple domains
[257] | Presents a model based on few-shot learning | Can improve performance by using few-shot learning to learn a model that can generalize to new domains
[258] | Introduces a generative adversarial network for domain adaptation for person re-identification | Can generate realistic images from the target domain
[259] | Introduces a graph neural network for domain adaptation | Can better handle complex relationships between data points
[260] | Introduces a hierarchical self-attention mechanism for domain adaptation | Can better handle complex relationships between features
[261] | Introduces a new method for learning invariant representations for domain adaptation | Can improve performance by learning representations that are invariant to domain shift
[262] | Introduces a reinforcement learning framework for domain adaptation | Can learn to adapt to new domains in an online setting
[263] | Presents a model based on self-supervised learning | Can improve performance by using self-supervised learning to learn features that are invariant to domain shift
[264] | Presents a model based on synthetic data | Can improve performance by generating synthetic data that are like the target domain
[265] | Introduces a variational autoencoder for domain adaptation | Can generate realistic images from the target domain
[266] | Introduces a Wasserstein distance for domain adaptation | Can better handle data with large domain shifts
[267] | Introduces an ensemble domain adaptation framework | Can improve performance by combining multiple models
[268] | Introduces a joint distribution adaptation framework for domain adaptation | Can improve performance by jointly adapting the source and target distributions
[269] | Introduces a meta-learning framework for domain adaptation | Can improve performance by learning to adapt to new domains
[270] | Introduces a transfer learning framework for domain adaptation | Can improve performance by using a pre-trained model
[271] | Introduces a deep generative model for domain adaptation | Can generate realistic images from the target domain
[272] | Introduces an importance reweighting mechanism for domain adaptation | Can better handle imbalanced datasets
[273] | Introduces a domain adaptation method based on meta-learning | Can adapt to new domains quickly
[274] | Introduces a multi-task learning framework for domain adaptation | Can improve performance on multiple tasks
[275] | Introduces a gradient reversal technique for domain adaptation | Can better handle data with large domain shifts
[276] | Introduces a domain adaptation method based on self-supervised learning | Can be used with unlabeled data from the target domain
Finally, the examination of publication patterns and surveys (Table 10) shows that domain adaptation research is gaining popularity around the globe. The comprehensive survey in [277,278] provides a valuable overview of transfer learning, domain adaptation, and multi-source learning methods, emphasizing their interconnectedness. Ref. [279] delves into domain adaptation methods for computer vision, showcasing instance-based, feature-based, decision-based, generative, and meta-learning methods, while addressing challenges and future directions. For recent advances in visual recognition, ref. [280] serves as a resource, discussing key methods and directions in domain adaptation. Furthermore, refs. [281,282] contribute a systematic literature review of computer vision domain adaptation, identifying vital research topics [283]. These studies highlight the potential of domain adaptation across diverse domains, including object detection in video [284], face recognition [285], medical image analysis [286], natural language processing [287], robotics [288,289], and 3D vision [290,291,292]. Additionally, specialized strategies like multi-task learning [293,294,295] and transfer learning [296] demonstrate their capability to achieve state-of-the-art performance across various visual recognition tasks and to learn domain-invariant representations.
The historical trends show an increase in the volume of research articles published over time, highlighting the continued importance and relevance of domain adaptation strategies in the field of computer and robotic vision [297]. In conclusion, the analysis of domain adaptation strategies demonstrates the critical importance of deep learning-based methodologies, the diversity of research articles, and the widespread interest in this area [298]. These findings offer insightful information for academics, professionals, and decision-makers, driving the creation of stronger and more effective domain adaptation methods to handle the difficulties of practical vision applications.
Table 10. Publication surveys and trends.
Paper | Year | Scope | Methods | Contributions
[278] | 2019 | A comprehensive survey of transfer learning, domain adaptation, and multi-source learning methods | Transfer learning, domain adaptation, and multi-source learning methods | Provides a comprehensive overview of the three related fields of transfer learning, domain adaptation, and multi-source learning; discusses the challenges and future directions of each field
[279] | 2020 | A comprehensive survey of domain adaptation methods for computer vision | Instance-based, feature-based, decision-based, generative, and meta-learning methods | Provides a comprehensive overview of the field of domain adaptation in computer vision; discusses the challenges and future directions of domain adaptation in computer vision
[280] | 2021 | A survey of recent advances in domain adaptation for visual recognition | Instance-based, feature-based, decision-based, generative, and meta-learning methods | Discusses recent advances in domain adaptation for visual recognition; provides an overview of the challenges and future directions of domain adaptation for visual recognition
[299] | 2022 | A survey of recent advances in domain adaptation for computer vision | Instance-based, feature-based, decision-based, generative, and meta-learning methods | Discusses recent advances in domain adaptation for computer vision; provides an overview of the challenges and future directions of domain adaptation for computer vision
[281] | 2022 | A systematic literature review of domain adaptation methods for computer vision | Instance-based, feature-based, decision-based, generative, and meta-learning methods | Conducts a systematic literature review of domain adaptation methods for computer vision; identifies the most important research topics; provides an overview of the challenges and future directions
[300] | 2022 | A review of domain adaptation methods for computer vision | Instance-based, feature-based, decision-based, generative, and meta-learning methods | Reviews domain adaptation methods for computer vision; discusses their applications; provides an overview of the challenges and future directions
[284] | 2019 | A study of domain adaptation for object detection in video | Instance-based, feature-based, decision-based, generative, and meta-learning methods | Can improve the performance of object detection in videos with limited labeled data; applicable to both supervised and unsupervised domain adaptation
[285] | 2020 | A study of domain adaptation for face recognition | Instance-based, feature-based, decision-based, generative, and meta-learning methods | Can improve the performance of face recognition in new domains with limited labeled data; applicable to both supervised and unsupervised domain adaptation
[286] | 2021 | A study of domain adaptation for medical image analysis | Instance-based, feature-based, decision-based, generative, and meta-learning methods | Can improve the performance of medical image analysis in new domains with limited labeled data; applicable to both supervised and unsupervised domain adaptation
[287] | 2022 | A study of domain adaptation for natural language processing | Instance-based, feature-based, decision-based, generative, and meta-learning methods | Can improve the performance of natural language processing in new domains with limited labeled data; applicable to both supervised and unsupervised domain adaptation
[288] | 2022 | A study of domain adaptation for robotics | Instance-based, feature-based, decision-based, generative, and meta-learning methods | Can improve the performance of robots in new environments with limited labeled data; applicable to both supervised and unsupervised domain adaptation
[289] | 2022 | A study of domain adaptation for 3D vision | Instance-based, feature-based, decision-based, generative, and meta-learning methods | Can improve the performance of 3D vision algorithms in new environments with limited labeled data; applicable to both supervised and unsupervised domain adaptation
[301] | 2019 | A study of multi-task learning for domain adaptation | Multi-task learning | Can achieve state-of-the-art performance on a variety of visual recognition tasks; can be used to learn domain-invariant representations
[295] | 2020 | A study of transfer learning for domain adaptation | Transfer learning | Can achieve state-of-the-art performance on a variety of visual recognition tasks; can be used to learn domain-invariant representations
[296] | 2019 | A study of self-supervised learning for domain adaptation | Self-supervised learning | Can achieve state-of-the-art performance on a variety of visual recognition tasks; can be used to learn domain-invariant representations

2.3. Performance Metrics Comparison

We use a variety of performance metrics, such as accuracy, precision, recall, and F1-score, to assess the efficacy of domain adaptation approaches. These metrics are essential indicators of how well the models deal with domain shift and generalize across different data distributions [301].
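To make the evaluation protocol concrete, the following minimal sketch computes these four metrics with scikit-learn; the labels and predictions shown are hypothetical placeholders, not data from our experiments.

```python
# A minimal sketch of the metric computation described above, assuming
# scikit-learn is available. `y_true` and `y_pred` are hypothetical
# target-domain labels and adapted-model predictions, not study data.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [0, 1, 1, 0, 1, 0, 1, 1]  # ground-truth target-domain labels
y_pred = [0, 1, 0, 0, 1, 1, 1, 1]  # predictions from an adapted model

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1-score :", f1_score(y_true, y_pred))
```

Here, the F1-score is the harmonic mean of precision and recall, 2PR/(P + R), which is why it is used to summarize the balance between the two.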
The results show that, across a variety of computer and robotic vision tasks, deep learning-based approaches such as DANN and CycleGAN consistently outperform more established techniques such as TCA and MMD. By leveraging the strength of deep neural networks, DANN and CycleGAN successfully learn domain-invariant feature representations, resulting in appreciable performance gains [295]. For instance, DANN exhibits an average accuracy gain of around 15% over TCA and MMD in object identification tasks on cross-domain image datasets [296]. This demonstrates how effective deep learning-based approaches are at overcoming domain shift and achieving improved generalization [302].
Additionally, DANN outperforms conventional approaches in the precision and recall analyses of domain adaptation strategies for object detection tasks. Compared with TCA and MMD, DANN achieves an average 10% gain in precision and an 8% improvement in recall [303]. Such enhancements demonstrate DANN's capacity to recognize objects precisely and consistently, even under conditions of substantial domain variance. The F1-score analysis further highlights the superiority of deep learning-based approaches [304]. Compared with conventional techniques, DANN yields a significant F1-score improvement of 12%, demonstrating its effectiveness in balancing precision and recall, even in highly dynamic and complicated robotic vision environments [305].
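For readers unfamiliar with DANN's mechanism, the sketch below illustrates its core component, a gradient reversal layer, in PyTorch; the layer sizes and the reversal coefficient are illustrative assumptions, not the configuration evaluated in this study.

```python
# A minimal sketch of DANN's gradient reversal layer in PyTorch
# (assumed available). Layer sizes and `lambd` are illustrative.
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)  # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Gradients are negated (and scaled) on the backward pass,
        # pushing the feature extractor toward representations the
        # domain discriminator cannot separate.
        return grad_output.neg() * ctx.lambd, None

class DomainDiscriminator(nn.Module):
    def __init__(self, feat_dim=256, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 100), nn.ReLU(),
            nn.Linear(100, 2))  # two classes: source vs. target

    def forward(self, features):
        reversed_feats = GradReverse.apply(features, self.lambd)
        return self.net(reversed_feats)
```

Training this discriminator jointly with a task head over shared features is what drives the domain-invariant representations discussed above.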

2.4. Challenges and Insights from Cross-Domain Analysis

While deep learning-based domain adaptation approaches show impressive performance gains, our study reveals several issues and insights that warrant consideration in the pursuit of further advances.
The choice of suitable target domains presents a major challenge. To develop domain-invariant representations, deep learning-based techniques rely largely on target domain data [306]. As a result, the models' generalization and adaptation performance depends greatly on the choice of target domains. For successful domain adaptation, it is essential to ensure that the target domain data accurately reflect real-world conditions. The complexity of robotic vision tasks presents another difficulty [297]. In situations where robotic vision requires real-time decision-making, the adaptation process must be fast and efficient. Deep learning-based methods often demand substantial computing power and can lengthen inference times; research into addressing these computational constraints without sacrificing accuracy is ongoing [298].
Insights from the cross-domain study highlight the need for domain-specific adjustments. While deep learning-based systems excel at some tasks, there are instances where more conventional approaches, such as TCA and MMD, show comparable efficacy [307]; a hybrid strategy that draws on the advantages of both conventional and deep learning-based approaches is therefore a fascinating direction for future study (a minimal MMD sketch follows below). The performance of deep learning-based models is also substantially affected by the availability of annotated datasets for the target domains [308]. Unsupervised or weakly supervised domain adaptation approaches may be useful alternatives when labeled target domain data are difficult or expensive to obtain [309].
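As a reference point for such hybrid strategies, the following hedged sketch computes a (biased) RBF-kernel MMD estimate between source and target feature batches; the feature dimensions and the kernel bandwidth `sigma` are illustrative assumptions, not values from the surveyed papers.

```python
# A minimal sketch of a biased RBF-kernel MMD estimate in PyTorch
# (assumed available); batch sizes, feature dimension, and `sigma`
# are illustrative choices.
import torch

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise squared Euclidean distances -> Gaussian kernel matrix.
    dists = torch.cdist(x, y) ** 2
    return torch.exp(-dists / (2 * sigma ** 2))

def mmd(source, target, sigma=1.0):
    k_ss = rbf_kernel(source, source, sigma).mean()
    k_tt = rbf_kernel(target, target, sigma).mean()
    k_st = rbf_kernel(source, target, sigma).mean()
    return k_ss + k_tt - 2 * k_st

src = torch.randn(64, 256)        # source-domain features
tgt = torch.randn(64, 256) + 0.5  # shifted target-domain features
print("MMD estimate:", mmd(src, tgt).item())
```

Minimizing such an estimate over a shared feature extractor is the basic mechanism behind MMD-based adaptation; in practice, kernel choice and bandwidth strongly affect the result.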
Overall, the comparative analysis of domain adaptation strategies in computer and robotic vision reveals a landscape of both potential and problems. Although deep learning-based approaches show considerable promise, the domain-specific nature of the tasks and computational concerns call for deliberate modifications and multidisciplinary cooperation [310]. The knowledge gathered from this analysis will help researchers and practitioners navigate the difficulties of domain adaptation and encourage the creation of more reliable and effective vision systems for practical applications [311].

3. Applications and Real-World Scenarios

In this section, we present practical applications of domain adaptation strategies within the domains of computer and robotic vision. We aim to shed light on the transformative impact of these techniques, illustrating their potential to enhance the adaptability and functionality of vision systems in real-world scenarios [312].

3.1. Domain Adaptation in Computer Vision: Real-World Applications

3.1.1. Autonomous Driving Systems

One noteworthy arena where domain adaptation proves invaluable is the development of autonomous driving systems. Here, vision-based perception is a linchpin for safe navigation in dynamic and unpredictable environments. Domain adaptation methodologies empower autonomous vehicles to maintain exceptional accuracy in tasks such as object identification, lane segmentation, and pedestrian recognition amidst diverse weather conditions, fluctuations in illumination, and changing road geometries. For instance, by training on comprehensive datasets encompassing a spectrum of weather scenarios, deep learning-based domain adaptation models enable autonomous vehicles to detect and respond to critical elements such as pedestrians and obstacles even in challenging conditions like rain or fog [313]. Figure 8 shows how the domain gap is visible in the acquired dataset due to changes in weather.

3.1.2. Medical Imaging and Diagnosis

The medical industry also reaps substantial benefits from domain adaptation techniques, particularly in the realm of computer vision-based diagnostic tools. Domain adaptation plays a pivotal role in ensuring the accuracy and reliability of medical image analysis and diagnosis by adjusting models to account for variations in imaging modalities, technology, and patient demographics. For instance, domain adaptation allows for knowledge transfer from well-annotated datasets at one medical institution to datasets with fewer labeled examples at another. This approach significantly enhances classification accuracy in medical image analysis, facilitating early disease diagnosis and the development of personalized treatment plans through the harmonization of feature distributions across multiple datasets [314].

3.1.3. Surveillance and Security

In the domain of surveillance and security, real-time monitoring and threat detection heavily rely on computer vision technology. Domain adaptation algorithms enable surveillance systems to dynamically adapt to changing monitoring settings, ensuring the precise and timely detection of suspicious activities and objects [315]. Models can flexibly adjust to alterations in camera angles, lighting conditions, and other environmental factors, thereby maintaining a high level of accuracy in identifying abnormal behavior and potential security threats across diverse surveillance scenarios through the utilization of domain-invariant features.

3.2. Domain Adaptation in Robotic Vision: Real-World Applications

3.2.1. Industrial Automation

The realm of industrial automation relies significantly on domain adaptation techniques to facilitate the seamless integration of robotic vision systems across various production settings. Domain adaptation empowers robotic vision to maintain consistent and accurate object recognition and manipulation by adapting to changes in illumination, object textures, and camera perspectives [316,317]. For instance, within robotic assembly lines, domain adaptation enables robots to proficiently handle parts and components from diverse sources: by aligning the robot's vision with the unique characteristics of each component, it ensures precise grasping and assembly while optimizing production efficiency and minimizing errors, as shown in Figure 9.

3.2.2. Agriculture and Farming

In agriculture and farming, where robotic vision systems are deployed for crop monitoring, disease diagnosis, and precision agriculture, domain adaptation holds tremendous potential. By accommodating shifts in ambient conditions, soil types, and crop varieties, domain adaptation enables agricultural robots to tailor their vision for effective data-driven decision-making. For example, in precision agriculture, robotic systems can analyze multispectral and hyperspectral imagery to detect signs of crop stress, nutrient deficiencies, and pest infestations. Domain adaptation ensures precise and timely interventions, optimizing agricultural productivity and resource utilization by adapting to diverse crop types and environmental conditions.

3.2.3. Search and Rescue Missions

Crucial search and rescue operations often entail traversing challenging and hazardous terrains. Here, robotic vision systems, thanks to domain adaptation approaches, exhibit enhanced adaptability, enabling them to perform effectively in unforeseen and unpredictable disaster scenarios. Domain adaptation allows robotic platforms to dynamically adjust their visual perception to varying lighting conditions, structural damage, and debris congestion during search and rescue missions. Domain-adaptive robots can swiftly locate victims and respond to evolving situations, thereby augmenting the effectiveness and success of rescue efforts.
In conclusion, the applications of domain adaptation techniques in computer and robotic vision are extensive, encompassing domains from agricultural robotics to autonomous driving. Domain adaptation unlocks previously unexplored possibilities for reliable and effective performance in real-world contexts, enabling vision systems to adapt to a multitude of environmental variables. As the field continues to evolve through multidisciplinary collaborations and data-driven methodologies, the relevance and impact of domain adaptation in shaping the trajectory of computer and robotic vision technologies will only continue to grow.

4. Conclusions

This section outlines the major conclusions of our in-depth examination of domain adaptation strategies in computer and robotic vision. We emphasize the benefits and importance of cross-domain analysis and discuss the field's promising prospects and open issues.

4.1. Summary of Findings

In our analysis of domain adaptation techniques for computer and robotic vision, it is evident that deep learning-based methods, including Domain Adversarial Neural Networks (DANN) and CycleGAN, consistently exhibit superior performance when contrasted with conventional methodologies such as Transfer Component Analysis (TCA) and Maximum Mean Discrepancy (MMD). In several real-world contexts, deep learning-based techniques regularly outperform conventional approaches with regard to accuracy, recall, precision, and F1-score, among other performance metrics. Additionally, the introduction of approaches such as importance reweighting, multi-task learning, and maximum mean discrepancy loss enhances model accuracy but increases complexity. Generative models such as CoGAN and MUNIT show promise for visual translation and adaptation. Models such as DANN, while capable of handling significant domain shifts, exhibit sensitivity to hyperparameters. Furthermore, diverse techniques, including adversarial learning, generative adversarial networks, meta-learning, and self-supervised learning, consistently improve domain adaptation performance (Table 8).
When abundant labeled data are available in the target domain, deep learning proves effective, albeit at the cost of substantial computational resources. Traditional methods, such as TCA and MMD, are pragmatic when target domain data are scarce or interpretability is vital. The choice between these methods hinges on factors such as data availability, computational resources, and the need for interpretability.
Specific models, such as those of refs. [318,319,320], exemplify the potential of these techniques, while combining approaches such as ensemble learning and self-supervised learning further enhances performance (Table 9).
The investigation emphasizes the critical role that domain adaptation plays in helping computer and robotic vision systems adapt to shifting settings and deliver reliable performance under difficult circumstances. Vision systems become more dependable and flexible in practical applications thanks to domain adaptation, which enables smooth adaptation to changes in illumination, weather, camera views, and object textures. Furthermore, we find that hybrid methods, which combine the advantages of conventional and deep learning-based techniques, as well as domain-specific modifications show substantial potential. Embracing domain-specific adjustments enables the creation of specialized solutions that address the particular difficulties and traits of application areas, while hybrid methods exploit the complementary qualities of multiple methodologies, offering opportunities for further performance gains.

4.2. Contributions and Significance of Cross-Domain Analysis

This study’s cross-domain analysis offers significant new knowledge around domain adaptation in computer and robotic vision. Our research shows the efficacy and usefulness of domain adaptation approaches by methodically assessing a variety of methodologies, models, and real-world implementations. Through a meticulous examination of performance metrics, we acquire a comprehensive understanding of the strengths and weaknesses inherent in various strategies. This comprehensive insight equips both researchers and practitioners with the information required to make informed and prudent choices.
This research also shows how domain adaptation has a transformational effect on improving the adaptability, precision, and dependability of vision systems in real-world settings. Applications of domain adaptation in areas including agriculture, industrial automation, medical imaging, and autonomous driving demonstrate how it propels innovation and development across industries. Cross-domain analysis is significant because it can stimulate more study and advancement in domain adaptation. The problems and insights that have been discovered offer fresh avenues for investigation and chances to improve current methods. The research results may also be used to help create domain adaptation models that are more successful and efficient, advancing real-world applications and enhancing user experience.

4.3. Prospects and Open Challenges

The prospects for domain adaptation in computer and robotic vision are bright. As deep learning techniques continue to advance, we can expect increasingly sophisticated domain adaptation strategies that take advantage of self-supervised learning, meta-learning, and adversarial training. These developments will enable vision systems to learn stronger, more transferable representations, improving their performance in a variety of circumstances. However, several unresolved issues in the field require further research. To guarantee successful domain adaptation, it remains essential to choose the right target domains and to have access to annotated datasets for those domains. Robotic vision systems must also be able to adapt in real time to changing environments, which calls for creative solutions that strike a balance between accuracy and efficiency.
Additionally, achieving domain adaptation in circumstances with scant labeled data remains difficult. Improvements in unsupervised and weakly supervised domain adaptation techniques could remove this restriction and broaden the range of domain adaptation applications. To overcome these difficulties, interdisciplinary partnerships involving computer vision researchers, roboticists, and subject matter specialists are essential. Integrating domain-specific knowledge and expertise will enrich the domain adaptation research landscape and promote the creation of domain-adaptive vision systems that can be deployed in a variety of real-world settings.
In conclusion, domain adaptation in computer and robotic vision presents many opportunities and challenges. Our study provides insight into the value, importance, and potential of domain adaptation approaches. By harnessing domain adaptation, we can create more robust, flexible, and dependable vision systems that change how we see and interact with the environment. The evolution of computer and robotic vision technologies is poised to remain shaped by ongoing research focused on refining domain adaptation techniques, driving progress and fostering innovation in the foreseeable future.

Author Contributions

Conceptualization, Z.F., M.H.T., S.Z. and D.G.-Z.; methodology, Z.F., M.H.T. and S.Z.; validation, M.H.T., Z.F. and D.G.-Z.; formal analysis, Z.F., M.H.T. and S.Z.; investigation, Z.F.; resources, M.H.T. and S.Z.; data curation, Z.F. and M.H.T.; writing—original draft preparation, Z.F., M.H.T. and S.Z.; writing—review and editing, Z.F., S.Z., M.H.T. and D.G.-Z.; supervision, S.Z. and M.H.T.; funding acquisition, M.H.T. and D.G.-Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

The authors extend their sincere appreciation to Kennesaw State University for their invaluable support in covering the publication fees of this paper. This research would not have been possible without financial assistance. The commitment of Kennesaw State University to advancing scholarly contributions is gratefully acknowledged, highlighting their dedication to fostering academic excellence. We are grateful for the opportunity to share our work with the broader scientific community, and this support has significantly contributed to the realization of our research goals.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, Q.; Meng, F.; Breckon, T.P. Data augmentation with norm-AE and selective pseudo-labelling for unsupervised domain adaptation. Neural Netw. 2023, 161, 614–625. [Google Scholar] [CrossRef]
  2. Yu, Y.; Chen, W.; Chen, F.; Jia, W.; Lu, Q. Night-time vehicle model recognition based on domain adaptation. Multimed. Tools Appl. 2023, 1–20. [Google Scholar] [CrossRef]
  3. Han, K.; Kim, Y.; Han, D.; Lee, H.; Hong, S. TL-ADA: Transferable Loss-based Active Domain Adaptation. Neural Netw. 2023, 161, 670–681. [Google Scholar] [CrossRef] [PubMed]
  4. Gojić, G.; Vincan, V.; Kundačina, O.; Mišković, D.; Dragan, D. Non-adversarial Robustness of Deep Learning Methods for Computer Vision. arXiv 2023, arXiv:2305.14986. [Google Scholar]
  5. Yu, Z.; Li, J.; Zhu, L.; Lu, K.; Shen, H.T. Classification Certainty Maximization for Unsupervised Domain Adaptation. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 4232–4243. [Google Scholar] [CrossRef]
  6. Ghaffari, R.; Helfroush, M.S.; Khosravi, A.; Kazemi, K.; Danyali, H.; Rutkowski, L. Towards domain adaptation with open-set target data: Review of theory and computer vision applications. In Information Fusion; Elsevier: Amsterdam, The Netherlands, 2023; p. 101912. [Google Scholar]
  7. Xu, J.; Xiao, L.; López, A.M. Self-supervised domain adaptation for computer vision tasks. IEEE Access 2019, 7, 156694–156706. [Google Scholar] [CrossRef]
  8. Venkateswara, H.; Panchanathan, S. Domain Adaptation in Computer Vision with Deep Learning; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  9. Petersen, K.; Vakkalanka, S.; Kuzniarz, L. Guidelines for conducting systematic mapping studies in software engineering: An update. Inf. Softw. Technol. 2015, 64, 1–18. [Google Scholar] [CrossRef]
  10. Chen, W.; Hu, H. Generative attention adversarial classification network for unsupervised domain adaptation. Pattern Recognit. 2020, 107, 107440. [Google Scholar] [CrossRef]
  11. Yang, J.; Zou, H.; Zhou, Y.; Xie, L. Robust adversarial discriminative domain adaptation for real-world cross-domain visual recognition. Neurocomputing 2021, 433, 28–36. [Google Scholar] [CrossRef]
  12. Rahman, M.M.; Fookes, C.; Baktashmotlagh, M.; Sridharan, S. On minimum discrepancy estimation for deep domain adaptation. In Domain Adaptation for Visual Understanding; Springer: Berlin/Heidelberg, Germany, 2020; pp. 81–94. [Google Scholar]
  13. Dunnhofer, M.; Martinel, N.; Micheloni, C. Weakly-supervised domain adaptation of deep regression trackers via reinforced knowledge distillation. IEEE Robot. Autom. Lett. 2021, 6, 5016–5023. [Google Scholar] [CrossRef]
  14. Yang, G.; Ding, M.; Zhang, Y. Bi-directional class-wise adversaries for unsupervised domain adaptation. In Applied Intelligence; Springer: Berlin/Heidelberg, Germany, 2022; pp. 1–17. [Google Scholar]
  15. Oza, P.; Sindagi, V.A.; Sharmini, V.V.; Patel, V.M. Unsupervised domain adaptation of object detectors: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 2022, 3217046. [Google Scholar] [CrossRef] [PubMed]
  16. Csurka, G. Deep visual domain adaptation. In Proceedings of the 2020 22nd International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), Timisoara, Romania, 1–4 September 2020; pp. 1–8. [Google Scholar]
  17. Chen, C.; Chen, Z.; Jiang, B.; Jin, X. Joint domain alignment and discriminative feature learning for unsupervised deep domain adaptation. Proc. AAAI Conf. Artif. Intell. 2019, 33, 3296–3303. [Google Scholar] [CrossRef]
  18. Loghmani, M.R.; Robbiano, L.; Planamente, M.; Park, K.; Caputo, B.; Vincze, M. Unsupervised domain adaptation through inter-modal rotation for rgb-d object recognition. IEEE Robot. Autom. Lett. 2020, 5, 6631–6638. [Google Scholar] [CrossRef]
  19. Li, C.; Du, D.; Zhang, L.; Wen, L.; Luo, T.; Wu, Y.; Zhu, P. Spatial attention pyramid network for unsupervised domain adaptation. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 481–497. [Google Scholar]
  20. Dourado, A.; Guth, F.; de Campos, T.; Weigang, L. Domain adaptation for holistic skin detection. In Proceedings of the 2021 34th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI), Gramado, Brazil, 18–22 October 2021; pp. 362–369. [Google Scholar]
  21. Venkateswara, H.; Chakraborty, S.; Panchanathan, S. Deep-Learning Systems for Domain Adaptation in Computer Vision: Learning Transferable Feature Representations. IEEE Signal. Process. Mag. 2017, 34, 117–129. [Google Scholar] [CrossRef]
  22. Peng, X.; Li, Y.; Saenko, K. Domain2vec: Domain embedding for unsupervised domain adaptation. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 756–774. [Google Scholar]
  23. Bozorgtabar, B.; Mahapatra, D.; Thiran, J.-P. ExprADA: Adversarial domain adaptation for facial expression analysis. Pattern Recognit. 2020, 100, 107111. [Google Scholar] [CrossRef]
  24. Bateson, M.; Kervadec, H.; Dolz, J.; Lombaert, H.; Ben Ayed, I. Source-relaxed domain adaptation for image segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI 2020: 23rd International Conference, Lima, Peru, 4–8 October 2020; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 490–499. [Google Scholar]
  25. Zhang, C.; Zhao, Q. Attention guided for partial domain adaptation. Inf. Sci. N. Y. 2021, 547, 860–869. [Google Scholar] [CrossRef]
  26. Han, C.; Zhou, D.; Xie, Y.; Gong, M.; Lei, Y.; Shi, J. Collaborative representation with curriculum classifier boosting for unsupervised domain adaptation. Pattern Recognit. 2021, 113, 107802. [Google Scholar] [CrossRef]
  27. Wittich, D.; Rottensteiner, F. Appearance based deep domain adaptation for the classification of aerial images. ISPRS J. Photogramm. Remote Sens. 2021, 180, 82–102. [Google Scholar] [CrossRef]
  28. Sahoo, A.; Panda, R.; Feris, R.; Saenko, K.; Das, A. Select, label, and mix: Learning discriminative invariant feature representations for partial domain adaptation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 4210–4219. [Google Scholar]
  29. Thota, M.; Leontidis, G. Contrastive domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision. and Pattern Recognition, Virtual, 24 June 2021; pp. 2209–2218. [Google Scholar]
  30. Mahyari, A.G.; Locker, T. Domain adaptation for robot predictive maintenance systems. arXiv 2018, arXiv:1809.08626. [Google Scholar]
  31. Hoffman, J.; Tzeng, E.; Park, T.; Zhu, J.; Isola, P.; Saenko, K.; Efros, A.A.; Darrell, T. CyCADA: Cycle-Consistent Adversarial Domain Adaptation. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018. [Google Scholar]
  32. Tzeng, E.; Hoffman, J.; Saenko, K.; Darrell, T. Adversarial Discriminative Domain Adaptation. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2962–2971. [Google Scholar]
  33. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
  34. Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A. DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 40, 834–848. [Google Scholar] [CrossRef] [PubMed]
  35. Ros, G.; Sellart, L.; Materzynska, J.; Vázquez, D.; López, A.M. The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 3234–3243. [Google Scholar]
  36. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar] [CrossRef]
  37. Richter, S.R.; Vineet, V.; Roth, S.; Koltun, V. Playing for Data: Ground Truth from Computer Games. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Springer International Publishing: Berlin/Heidelberg, Germany, 2016. [Google Scholar] [CrossRef]
  38. Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; Schiele, B. The Cityscapes Dataset for Semantic Urban Scene Understanding. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 3213–3223. [Google Scholar]
  39. Long, M.; Cao, Y.; Wang, J.; Jordan, M.I. Learning Transferable Features with Deep Adaptation Networks. In Proceedings of the International Conference on Machine Learning, Lille, France, 7–9 July 2015. [Google Scholar]
  40. Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2242–2251. [Google Scholar]
  41. Kumari, S.; Singh, P. Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives. arXiv 2023, arXiv:2308.01265. [Google Scholar]
  42. Liang, J.; He, R.; Tan, T.P. A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts. arXiv 2023, arXiv:2303.15361. [Google Scholar]
  43. Zhao, D.; Wang, S.; Zang, Q.; Quan, D.; Ye, X.; Jiao, L. Towards Better Stability and Adaptability: Improve Online Self-Training for Model Adaptation in Semantic Segmentation. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 18–22 June 2023. [Google Scholar]
  44. Fang, Y.; Yap, P.; Lin, W.; Zhu, H.; Liu, M. Source-Free Unsupervised Domain Adaptation: A Survey. arXiv 2022, arXiv:2301.00265. [Google Scholar]
  45. Wang, Y.; Liang, J.; Zhang, Z. Source Data-Free Cross-Domain Semantic Segmentation: Align, Teach and Propagate. arXiv 2021, arXiv:2106.11653. [Google Scholar]
  46. Paul, S.; Khurana, A.; Aggarwal, G. Unsupervised Adaptation of Semantic Segmentation Models without Source Data. arXiv 2021, arXiv:2112.02359. [Google Scholar]
  47. Wang, Y.; Liang, J.; Zhang, Z.; Xiao, J.; Mei, S.; Zhang, Z. Domain Adaptive Semantic Segmentation without Source Data: Align, Teach and Propagate. arXiv 2021, arXiv:2110.06484v1. [Google Scholar]
  48. Csurka, G.; Volpi, R.; Chidlovskii, B. Unsupervised Domain Adaptation for Semantic Image Segmentation: A Comprehensive Survey. arXiv 2021, arXiv:2112.03241. [Google Scholar]
  49. Akkaya, I.B.; Halici, U. Self-training via Metric Learning for Source-Free Domain Adaptation of Semantic Segmentation. arXiv 2022, arXiv:2212.04227. [Google Scholar]
  50. Csurka, G.; Volpi, R.; Chidlovskii, B. Semantic Image Segmentation: Two Decades of Research. Found. Trends Comput. Graph. Vis. 2023, 14, 1–162. [Google Scholar] [CrossRef]
  51. Redmon, J.; Divvala, S.; Girshick, R.B.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
  52. Lin, T.Y.; Goyal, P.; Girshick, R.B.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 2999–3007. [Google Scholar]
  53. Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.E.; Fu, C.-Y.; Berg, A. SSD: Single Shot MultiBox Detector. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 21–37. [Google Scholar]
  54. Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
  55. Biswas, D.; Tešić, J. Progressive Domain Adaptation with Contrastive Learning for Object Detection in the Satellite Imagery. arXiv 2022, arXiv:2209.02564. [Google Scholar] [CrossRef]
  56. Ren, S.; He, K.; Girshick, R.B.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 39, 1137–1149. [Google Scholar] [CrossRef]
  57. Lin, T.Y.; Maire, M.; Belongie, S.J.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Proceedings of the Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Springer International Publishing: Berlin/Heidelberg, Germany, 2014; pp. 740–755. [Google Scholar]
  58. Lin, T.Y.; Dollár, P.; Girshick, R.B.; He, K.; Hariharan, B.; Belongie, S.J. Feature Pyramid Networks for Object Detection. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 936–944. [Google Scholar]
  59. Bochkovskiy, A.; Wang, C.-Y.; Liao, H. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
  60. Cheng, G.; Yuan, X.; Yao, X.; Yan, K.; Zeng, Q.; Han, J. Towards Large-Scale Small Object Detection: Survey and Benchmarks. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 13467–13488. [Google Scholar] [CrossRef] [PubMed]
  61. Zhang, X.; Feng, Y.; Zhang, S.; Wang, N.; Mei, S.; He, M. Semi-Supervised Person Detection in Aerial Images with Instance Segmentation and Maximum Mean Discrepancy Distance. Remote Sens. 2023, 15, 2928. [Google Scholar] [CrossRef]
  62. Xiong, Z.; Song, T.; He, S.; Yao, Z.; Wu, X. A unified and costless approach for improving small and long-tail object detection in aerial images of traffic scenarios. Appl. Intell. 2022, 53, 14426–14447. [Google Scholar] [CrossRef]
  63. Leng, J.; Mo, M.; Zhou, Y.; Gao, C.; Li, W.; Gao, X. Pareto Refocusing for Drone-View Object Detection. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 1320–1334. [Google Scholar] [CrossRef]
  64. Xu, C.; Wang, J.; Yang, W.; Yu, H.; Yu, L.; Xia, G. RFLA: Gaussian Receptive Field based Label Assignment for Tiny Object Detection. In European Conference on Computer Vision; Springer Nature: Cham, Switzerland, 2022; pp. 526–543. [Google Scholar]
  65. Liu, Y.; Li, W.; Tan, L.; Huang, X.; Zhang, H.; Jiang, X. DB-YOLOv5: A UAV Object Detection Model Based on Dual Backbone Network for Security Surveillance. Electronics 2023, 12, 3296. [Google Scholar] [CrossRef]
  66. Wan, Y.; Liao, Z.; Liu, J.; Song, W.; Ji, H.; Gao, Z. Small object detection leveraging density-aware scale adaptation. Photogramm. Rec. 2023, 38, 160–175. [Google Scholar] [CrossRef]
  67. Zhang, Y.; Wu, C.; Guo, W.; Zhang, T.; Li, W. CFANet: Efficient Detection of UAV Image Based on Cross-Layer Feature Aggregation. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–11. [Google Scholar] [CrossRef]
  68. Li, X.; Diao, W.; Mao, Y.; Gao, P.; Mao, X.; Li, X.; Sun, X. OGMN: Occlusion-guided Multi-task Network for Object Detection in UAV Images. arXiv 2023, arXiv:2304.11805. [Google Scholar] [CrossRef]
  69. Lu, S.; Lu, H.; Dong, J.; Wu, S. Object Detection for UAV Aerial Scenarios Based on Vectorized IOU. Sensors 2023, 23, 3061. [Google Scholar] [CrossRef]
  70. Zhou, B.; Khosla, A.; Lapedriza, À.; Oliva, A.; Torralba, A. Learning Deep Features for Discriminative Localization. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 2921–2929. [Google Scholar]
  71. Singh, K.; Lee, Y.J. Hide-and-Seek: Forcing a Network to be Meticulous for Weakly-Supervised Object and Action Localization. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 3544–3553. [Google Scholar]
  72. Yun, S.; Han, D.; Oh, S.; Chun, S.; Choe, J.; Yoo, Y.J. CutMix: Regularization Strategy to Train Strong Classifiers With Localizable Features. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6022–6031. [Google Scholar]
  73. Choe, J.; Shim, H. Attention-Based Dropout Layer for Weakly Supervised Object Localization. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seoul, Republic of Korea, 27–28 October 2019; pp. 2214–2223. [Google Scholar]
  74. Xue, H.; Liu, C.; Wan, F.; Jiao, J.; Ji, X.; Ye, Q. DANet: Divergent Activation for Weakly Supervised Object Localization. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6588–6597. [Google Scholar]
  75. Zhang, X.; Wei, Y.; Feng, J.; Yang, Y.; Huang, T.S. Adversarial Complementary Learning for Weakly Supervised Object Localization. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1325–1334. [Google Scholar]
  76. Zhang, X.; Wei, Y.; Kang, G.; Yang, Y.; Huang, T. Self-produced Guidance for Weakly-supervised Object Localization. arXiv 2018, arXiv:1807.08902. [Google Scholar]
  77. Wah, C.; Branson, S.; Welinder, P.; Perona, P.; Belongie, S. The Caltech-Ucsd Birds-200-2011 Dataset. 2011. Available online: https://authors.library.caltech.edu/records/cvm3y-5hh21 (accessed on 11 September 2023).
  78. Zhao, Y.; Ye, Q.; Wu, W.; Shen, C.; Wan, F. Generative Prompt Model for Weakly Supervised Object Localization. arXiv 2023, arXiv:2307.09756. [Google Scholar]
  79. Xie, J.; Luo, Z.; Li, Y.; Liu, H.; Shen, L.; Shou, M.Z. Open-World Weakly-Supervised Object Localization. arXiv 2023, arXiv:2304.08271. [Google Scholar]
  80. Shao, F.; Luo, Y.; Wu, S.; Li, Q.; Gao, F.; Yang, Y.; Xiao, J. Further Improving Weakly-supervised Object Localization via Causal Knowledge Distillation. arXiv 2023, arXiv:2301.01060. [Google Scholar]
  81. Shaharabany, T.; Tewel, Y.; Wolf, L. What is Where by Looking: Weakly-Supervised Open-World Phrase-Grounding without Text Inputs. arXiv 2022, arXiv:2206.09358. [Google Scholar]
  82. Xu, L.; Ouyang, W.; Bennamoun, M.; Boussaid, F.; Xu, D. Learning Multi-Modal Class-Specific Tokens for Weakly Supervised Dense Object Localization. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18 June 2023; pp. 19596–19605. [Google Scholar]
  83. Shao, F.; Chen, L.; Shao, J.; Ji, W.; Xiao, S.; Ye, L.; Zhuang, Y.; Xiao, J. Deep Learning for Weakly-Supervised Object Detection and Localization: A Survey. Neurocomputing 2022, 496, 192–207. [Google Scholar] [CrossRef]
  84. Xu, R.; Luo, Y.; Hu, H.; Du, B.; Shen, J.; Wen, Y. Rethinking the Localization in Weakly Supervised Object Localization. arXiv 2023, arXiv:2308.06161. [Google Scholar]
  85. Planamente, M.; Plizzari, C.; Cannici, M.; Ciccone, M.; Strada, F.; Bottino, A.; Matteucci, M.; Caputo, B. Da4event: Towards bridging the sim-to-real gap for event cameras using domain adaptation. IEEE Robot. Autom. Lett. 2021, 6, 6616–6623. [Google Scholar] [CrossRef]
  86. Chen, C.; Xie, W.; Wen, Y.; Huang, Y.; Ding, X. Multiple-source domain adaptation with generative adversarial nets. Knowl. Based Syst. 2020, 199, 105962. [Google Scholar] [CrossRef]
  87. Bucci, S.; Loghmani, M.R.; Tommasi, T. On the effectiveness of image rotation for open set domain adaptation. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 422–438. [Google Scholar]
  88. Athanasiadis, C.; Hortal, E.; Asteriadis, S. Audio–visual domain adaptation using conditional semi-supervised generative adversarial networks. Neurocomputing 2020, 397, 331–344. [Google Scholar] [CrossRef]
  89. Scalbert, M.; Vakalopoulou, M.; Couzinié-Devy, F. Multi-source domain adaptation via supervised contrastive learning and confident consistency regularization. arXiv 2021, arXiv:2106.16093. [Google Scholar]
  90. Roy, S.; Trapp, M.; Pilzer, A.; Kannala, J.; Sebe, N.; Ricci, E.; Solin, A. Uncertainty-guided source-free domain adaptation. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 537–555. [Google Scholar]
  91. Wang, Y.; Nie, L.; Li, Y.; Chen, S. Soft large margin clustering for unsupervised domain adaptation. Knowl. Based Syst. 2020, 192, 105344. [Google Scholar] [CrossRef]
  92. Zhang, C.; Zhao, Q.; Wang, Y. Transferable attention networks for adversarial domain adaptation. Inf. Sci. N. Y. 2020, 539, 422–433. [Google Scholar] [CrossRef]
  93. Hou, J.; Ding, X.; Deng, J.D.; Cranefield, S. Unsupervised domain adaptation using deep networks with cross-grafted stacks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 27 October–2 November 2019. [Google Scholar]
  94. Huang, J.; Guan, D.; Xiao, A.; Lu, S. Rda: Robust domain adaptation via fourier adversarial attacking. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 8988–8999. [Google Scholar]
  95. Kim, T.; Kim, C. Attract, perturb, and explore: Learning a feature alignment network for semi-supervised domain adaptation. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 591–607. [Google Scholar]
  96. Kieu, M.; Bagdanov, A.D.; Bertini, M.; Del Bimbo, A. Domain adaptation for privacy-preserving pedestrian detection in thermal imagery. In Proceedings of the Image Analysis and Processing–ICIAP 2019: 20th International Conference, Trento, Italy, 9–13 September 2019; pp. 203–213. [Google Scholar]
  97. Porav, H.; Bruls, T.; Newman, P. Don’t worry about the weather: Unsupervised condition-dependent domain adaptation. In Proceedings of the 2019 IEEE Intelligent Transportation Systems Conference (ITSC), Auckland, New Zealand, 27–30 October 2019; pp. 33–40. [Google Scholar]
  98. Xu, Y.; Chen, L.; Duan, L.; Tsang, I.W.; Luo, J. Open Set Domain Adaptation With Soft Unknown-Class Rejection. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 1601–1612. [Google Scholar] [CrossRef]
  99. Braytee, A.; Naji, M.; Kennedy, P.J. Unsupervised domain-adaptation-based tensor feature learning with structure preservation. IEEE Trans. Artif. Intell. 2022, 3, 370–380. [Google Scholar] [CrossRef]
  100. Fujii, K.; Kawamoto, K. Generative and self-supervised domain adaptation for one-stage object detection. Array 2021, 11, 100071. [Google Scholar] [CrossRef]
  101. Kurmi, V.K.; Kumar, S.; Namboodiri, V.P. Attending to discriminative certainty for domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 491–500. [Google Scholar]
  102. Li, R.; Jia, X.; He, J.; Chen, S.; Hu, Q. T-svdnet: Exploring high-order prototypical correlations for multi-source domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 9991–10000. [Google Scholar]
  103. Zhao, S.; Fu, H.; Gong, M.; Tao, D. Geometry-aware symmetric domain adaptation for monocular depth estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 9788–9798. [Google Scholar]
  104. Wen, J.; Yuan, J.; Zheng, Q.; Liu, R.; Gong, Z.; Zheng, N. Hierarchical domain adaptation with local feature patterns. Pattern Recognit. 2022, 124, 108445. [Google Scholar] [CrossRef]
  105. Jiang, Q.; Zhang, Y.; Bao, F.; Zhao, X.; Zhang, C.; Liu, P. Two-step domain adaptation for underwater image enhancement. Pattern Recognit. 2022, 122, 108324. [Google Scholar] [CrossRef]
  106. Zhang, L.; Fu, J.; Wang, S.; Zhang, D.; Dong, Z.; Chen, C.L.P. Guide subspace learning for unsupervised domain adaptation. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 3374–3388. [Google Scholar] [CrossRef]
  107. Wang, J.; Chen, J.; Lin, J.; Sigal, L.; de Silva, C.W. Discriminative feature alignment: Improving transferability of unsupervised domain adaptation by Gaussian-guided latent alignment. Pattern Recognit. 2021, 116, 107943. [Google Scholar] [CrossRef]
  108. Delussu, R.; Putzu, L.; Fumera, G.; Roli, F. Online domain adaptation for person re-identification with a human in the loop. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 3829–3836. [Google Scholar]
  109. Kang, Q.; Yao, S.; Zhou, M.; Zhang, K.; Abusorrah, A. Effective visual domain adaptation via generative adversarial distribution matching. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 3919–3929. [Google Scholar] [CrossRef] [PubMed]
  110. Klingner, M.; Termöhlen, J.-A.; Ritterbach, J.; Fingscheidt, T. Unsupervised batchnorm adaptation (ubna): A domain adaptation method for semantic segmentation without using source domain representations. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 210–220. [Google Scholar]
  111. Sun, T.; Segu, M.; Postels, J.; Wang, Y.; Van Gool, L.; Schiele, B.; Tombari, F.; Yu, F. SHIFT: A synthetic driving dataset for continuous multi-task domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 21371–21382. [Google Scholar]
  112. Zhang, Y.; Wang, N.; Cai, S. Adversarial sliced Wasserstein domain adaptation networks. Image Vis. Comput. 2020, 102, 103974. [Google Scholar] [CrossRef]
  113. Guizilini, V.; Li, J.; Ambru, R.; Gaidon, A. Geometric unsupervised domain adaptation for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 8537–8547. [Google Scholar]
  114. Han, C.; Zhou, D.; Xie, Y.; Lei, Y.; Shi, J. Label propagation with multi-stage inference for visual domain adaptation. Knowl. Based Syst. 2021, 216, 106809. [Google Scholar] [CrossRef]
  115. Sun, Y.; Tzeng, E.; Darrell, T.; Efros, A.A. Unsupervised domain adaptation through self-supervision. arXiv 2019, arXiv:1909.11825. [Google Scholar]
  116. Shirdel, G.; Ghanbari, A. A survey on self-supervised learning methods for domain adaptation in deep neural networks focusing on the optimization problems. AUT J. Math. Comput. 2022, 3, 217–235. [Google Scholar]
  117. Yue, X.; Zheng, Z.; Zhang, S.; Gao, Y.; Darrell, T.; Keutzer, K.; Vincentelli, A.S. Prototypical cross-domain self-supervised learning for few-shot unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 19–25 June 2021; pp. 13834–13844. [Google Scholar]
  118. Sanabria, R.; Zambonelli, F.; Dobson, S.; Ye, J. ContrasGAN: Unsupervised domain adaptation in Human Activity Recognition via adversarial and contrastive learning. Pervasive Mob. Comput. 2021, 78, 101477. [Google Scholar] [CrossRef]
  119. Yazdanpanah, M.; Moradi, P. Visual Domain Bridge: A source-free domain adaptation for cross-domain few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 2868–2877. [Google Scholar]
  120. Liang, J.; Hu, D.; Wang, Y.; He, R.; Feng, J. Source data-absent unsupervised domain adaptation through hypothesis transfer and labeling transfer. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 8602–8617. [Google Scholar] [CrossRef]
  121. Shin, I.; Woo, S.; Pan, F.; Kweon, I.S. Two-phase pseudo label densification for self-training based domain adaptation. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 532–548. [Google Scholar]
  122. Chen, L.; Lou, Y.; He, J.; Bai, T.; Deng, M. Geometric anchor correspondence mining with uncertainty modeling for universal domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision. and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 16134–16143. [Google Scholar]
  123. Gabourie, J.; Rostami, M.; Pope, P.E.; Kolouri, S.; Kim, K. Learning a domain-invariant embedding for unsupervised domain adaptation using class-conditioned distribution alignment. In Proceedings of the 2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 24–27 September 2019; pp. 352–359. [Google Scholar]
  124. Han, C.; Lei, Y.; Xie, Y.; Zhou, D.; Gong, M. Visual domain adaptation based on modified A-distance and sparse filtering. Pattern Recognit. 2020, 104, 107254. [Google Scholar]
  125. Li, H.; Wang, X.; Shen, F.; Li, Y.; Porikli, F.; Wang, M. Real-time deep tracking via corrective domain adaptation. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 2600–2612. [Google Scholar] [CrossRef]
  126. Chen, Z.; Zhuang, J.; Liang, X.; Lin, L. Blending-target domain adaptation by adversarial meta-adaptation networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2248–2257. [Google Scholar]
  127. Tang, H.; Zhao, Y.; Lu, H. Unsupervised person re-identification with iterative self-supervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 15–20 June 2019. [Google Scholar]
  128. Xu, R.; Liu, P.; Wang, L.; Chen, C.; Wang, J. Reliable weighted optimal transport for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Virtual, 14–19 June 2020; pp. 4394–4403. [Google Scholar]
  129. de CG Pereira, T.; de Campos, T.E. Domain Adaptation for Person Re-identification on New Unlabeled Data. In Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, Valletta, Malta, 27–29 February 2020; pp. 695–703. [Google Scholar]
  130. Yue, Z.; Kratzwald, B.; Feuerriegel, S. Contrastive domain adaptation for question answering using limited text corpora. arXiv 2021, arXiv:2108.13854. [Google Scholar]
  131. Liu, H.; Cao, Z.; Long, M.; Wang, J.; Yang, Q. Separate to adapt: Open set domain adaptation via progressive separation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 2927–2936. [Google Scholar]
  132. Ning, M.; Lu, D.; Wei, D.; Bian, C.; Yuan, C.; Yu, S.; Ma, K.; Zheng, Y. Multi-anchor active domain adaptation for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 9112–9122. [Google Scholar]
  133. Zhao, S.; Li, B.; Xu, P.; Keutzer, K. Multi-source domain adaptation in the deep learning era: A systematic survey. arXiv 2020, arXiv:2002.12169. [Google Scholar]
  134. Huang, J.; Lu, S.; Guan, D.; Zhang, X. Contextual-relation consistent domain adaptation for semantic segmentation. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 705–722. [Google Scholar]
  135. Tang, G.; Gao, X.; Chen, Z.; Zhong, H. Unsupervised adversarial domain adaptation with similarity diffusion for person re-identification. Neurocomputing 2021, 442, 337–347. [Google Scholar] [CrossRef]
  136. Feng, C.; He, Z.; Wang, J.; Lin, Q.; Zhu, Z.; Lu, J.; Xie, S. Domain adaptation with SBADA-GAN and Mean Teacher. Neurocomputing 2020, 396, 577–586. [Google Scholar] [CrossRef]
  137. Batanina, E.; Bekkouch, I.E.I.; Youssry, Y.; Khan, A.; Khattak, A.M.; Bortnikov, M. Domain adaptation for car accident detection in videos. In Proceedings of the 2019 Ninth International Conference on Image Processing Theory, Tools and Applications (IPTA), Istanbul, Turkey, 6–9 November 2019; pp. 1–6. [Google Scholar]
  138. Morerio, P.; Volpi, R.; Ragonesi, R.; Murino, V. Generative pseudo-label refinement for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 14–19 June 2020; pp. 3130–3139. [Google Scholar]
  139. Ayalew, T.W.; Ubbens, J.R.; Stavness, I. Unsupervised domain adaptation for plant organ counting. In Proceedings of the Computer Vision–ECCV 2020 Workshops, Glasgow, UK, 23–28 August 2020; pp. 330–346. [Google Scholar]
  140. Wang, S.; Wang, B.; Zhang, Z.; Heidari, A.A.; Chen, H. Class-aware sample reweighting optimal transport for multi-source domain adaptation. Neurocomputing 2023, 523, 213–223. [Google Scholar] [CrossRef]
  141. da Costa, P.R.D.O.; Akçay, A.; Zhang, Y.; Kaymak, U. Remaining useful lifetime prediction via deep domain adaptation. Reliab. Eng. Syst. Saf. 2020, 195, 106682. [Google Scholar] [CrossRef]
  142. Yang, S.; Wang, Y.; Van De Weijer, J.; Herranz, L.; Jui, S. Generalized source-free domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 8978–8987. [Google Scholar]
  143. Ahmed, W.; Morerio, P.; Murino, V. Cleaning noisy labels by negative ensemble learning for source-free unsupervised domain adaptation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 4–8 January 2022; pp. 1616–1625. [Google Scholar]
  144. Rahman, M.M.; Fookes, C.; Baktashmotlagh, M.; Sridharan, S. Correlation-aware adversarial domain adaptation and generalization. Pattern Recognit. 2020, 100, 107124. [Google Scholar] [CrossRef]
  145. Kurmi, V.K.; Namboodiri, V.P. Looking back at labels: A class based domain adaptation technique. In Proceedings of the 2019 International Joint Conference on Neural Networks (IJCNN), Budapest, Hungary, 14–19 July 2019; pp. 1–8. [Google Scholar]
  146. Truong, T.-D.; Duong, C.N.; Le, N.; Phung, S.L.; Rainwater, C.; Luu, K. Bimal: Bijective maximum likelihood approach to domain adaptation in semantic scene segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 8548–8557. [Google Scholar]
  147. Tian, Y.; Zhu, S. Partial domain adaptation on semantic segmentation. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 3798–3809. [Google Scholar] [CrossRef]
  148. Lin, C.; Li, Y.; Liu, Y.; Wang, X.; Geng, S. Building damage assessment from post-hurricane imageries using unsupervised domain adaptation with enhanced feature discrimination. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–10. [Google Scholar] [CrossRef]
  149. Xu, R.; Li, G.; Yang, J.; Lin, L. Larger norm more transferable: An adaptive feature norm approach for unsupervised domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 1426–1435. [Google Scholar]
  150. Hartley, Z.K.J.; French, A.P. Domain adaptation of synthetic images for wheat head detection. Plants 2021, 10, 2633. [Google Scholar] [CrossRef] [PubMed]
  151. Csurka, G.; Hospedales, T.M.; Salzmann, M.; Tommasi, T. Visual Domain Adaptation in the Deep Learning Era; Morgan & Claypool Publishers: Kentfield, CA, USA, 2022; ISBN 9781636393421. [Google Scholar]
  152. Kurmi, V.K.; Subramanian, V.K.; Namboodiri, V.P. Domain impression: A source data free domain adaptation method. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 5–9 January 2021; pp. 615–625. [Google Scholar]
  153. Jhoo, W.Y.; Heo, J.-P. Collaborative learning with disentangled features for zero-shot domain adaptation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 8896–8905. [Google Scholar]
  154. Yang, L.; Balaji, Y.; Lim, S.-N.; Shrivastava, A. Curriculum manager for source selection in multi-source domain adaptation. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 608–624. [Google Scholar]
  155. Luo, X.; Liu, S.; Fu, K.; Wang, M.; Song, Z. A learnable self-supervised task for unsupervised domain adaptation on point clouds. arXiv 2021, arXiv:2104.05164. [Google Scholar]
  156. Subhani, M.N.; Ali, M. Learning from scale-invariant examples for domain adaptation in semantic segmentation. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 290–306. [Google Scholar]
  157. Kieu, M.; Bagdanov, A.D.; Bertini, M.; Del Bimbo, A. Task-conditioned domain adaptation for pedestrian detection in thermal imagery. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 546–562. [Google Scholar]
  158. Azuma, C.; Ito, T.; Shimobaba, T. Adversarial domain adaptation using contrastive learning. Eng. Appl. Artif. Intell. 2023, 123, 106394. [Google Scholar] [CrossRef]
  159. Zheng, L.; Ma, W.; Cai, Y.; Lu, T.; Wang, S. GPDAN: Grasp Pose Domain Adaptation Network for Sim-to-Real 6-DoF Object Grasping. IEEE Robot. Autom. Lett. 2023, 2023, 3286816. [Google Scholar] [CrossRef]
  160. Sun, H.; Li, M. Enhancing unsupervised domain adaptation by exploiting the conceptual consistency of multiple self-supervised tasks. Sci. China Inf. Sci. 2023, 66, 142101. [Google Scholar] [CrossRef]
  161. Huang, X.; Choi, K.-S.; Zhou, N.; Zhang, Y.; Chen, B.; Pedrycz, W. Shallow Inception Domain Adaptation Network for EEG-based Motor Imagery Classification. IEEE Trans. Cogn. Dev. Syst. 2023, 2023, 3279262. [Google Scholar] [CrossRef]
  162. Zuo, Y.; Yao, H.; Zhuang, L.; Xu, C. Dual Structural Knowledge Interaction for Domain Adaptation. IEEE Trans. Multimed. 2023, 99, 1–15. [Google Scholar] [CrossRef]
  163. Chen, J.; He, P.; Zhu, J.; Guo, Y.; Sun, G.; Deng, M.; Li, H. Memory-Contrastive Unsupervised Domain Adaptation for Building Extraction of High-Resolution Remote Sensing Imagery. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–15. [Google Scholar] [CrossRef]
  164. Rizzoli, G.; Shenaj, D.; Zanuttigh, P. Source-Free Domain Adaptation for RGB-D Semantic Segmentation with Vision Transformers. arXiv 2023, arXiv:2305.14269. [Google Scholar]
  165. Wu, Z.; Li, Z.; Wei, D.; Shang, H.; Guo, J.; Chen, X.; Rao, Z.; Yu, Z.; Yang, J.; Li, S.; et al. Improving Neural Machine Translation Formality Control with Domain Adaptation and Reranking-based Transductive Learning. In Proceedings of the 20th International Conference on Spoken Language Translation (IWSLT 2023), Toronto, ON, Canada, 13–14 July 2023; pp. 180–186. [Google Scholar] [CrossRef]
  166. Zhou, J.; Tian, Q.; Lu, Z. Progressive decoupled target-into-source multi-target domain adaptation. Inf. Sci. N. Y. 2023, 634, 140–156. [Google Scholar] [CrossRef]
  167. Ma, X.; Zhang, X.; Wang, Z.; Pun, M.-O. Unsupervised domain adaptation augmented by mutually boosted attention for semantic segmentation of vhr remote sensing images. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–15. [Google Scholar] [CrossRef]
  168. Westfechtel, T.; Yeh, H.-W.; Meng, Q.; Mukuta, Y.; Harada, T. Backprop Induced Feature Weighting for Adversarial Domain Adaptation with Iterative Label Distribution Alignment. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 2–7 January 2023; pp. 392–401. [Google Scholar]
  169. Park, J.; Barnard, F.; Hossain, S.; Rambhatla, S.; Fieguth, P. Is Generative Modeling-based Stylization Necessary for Domain Adaptation in Regression Tasks? arXiv 2023, arXiv:2306.01706. [Google Scholar]
  170. Liu, X.; Zhou, S.; Lei, T.; Jiang, P.; Chen, Z.; Lu, H. First-Person Video Domain Adaptation with Multi-Scene Cross-Site Datasets and Attention-Based Methods. IEEE Trans. Circuits Syst. Video Technol. 2023, 2023, 3281671. [Google Scholar] [CrossRef]
  171. Mullick, K.; Jain, H.; Gupta, S.; Kale, A.A. Domain Adaptation of Synthetic Driving Datasets for Real-World Autonomous Driving. arXiv 2023, arXiv:2302.04149. [Google Scholar]
  172. Carrazco, J.I.D.; Kadam, S.K.; Morerio, P.; Del Bue, A.; Murino, V. Target-driven One-Shot Unsupervised Domain Adaptation. arXiv 2023, arXiv:2305.04628. [Google Scholar]
  173. Yu, Q.; Xi, N.; Yuan, J.; Zhou, Z.; Dang, K.; Ding, X. Source-Free Domain Adaptation for Medical Image Segmentation via Prototype-Anchored Feature Alignment and Contrastive Learning. arXiv 2023, arXiv:2307.09769. [Google Scholar]
  174. Goel, P.; Ganatra, A. Unsupervised Domain Adaptation for Image Classification and Object Detection Using Guided Transfer Learning Approach and JS Divergence. Sensors 2023, 23, 4436. [Google Scholar] [CrossRef]
  175. Liang, Y.; Wu, W.; Li, H.; Han, F.; Liu, Z.; Xu, P.; Lian, X.; Chen, X. WiAi-ID: Wi-Fi-Based Domain Adaptation for Appearance-independent Passive Person Identification. IEEE Internet Things J. 2023, 2023, 3288767. [Google Scholar] [CrossRef]
  176. Zhou, H.; Chang, Y.; Yan, W.; Yan, L. Unsupervised Cumulative Domain Adaptation for Foggy Scene Optical Flow. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 9569–9578. [Google Scholar]
  177. Niu, Z.; Wang, H.; Sun, H.; Ouyang, S.; Chen, Y.; Lin, L. MCKD: Mutually Collaborative Knowledge Distillation for Federated Domain Adaptation And Generalization. In Proceedings of the ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; pp. 1–5. [Google Scholar]
  178. Houben, T.; Huisman, T.; Pisarenco, M.; van der Sommen, F.; de With, P. Training procedure for scanning electron microscope 3D surface reconstruction using unsupervised domain adaptation with simulated data. J. Micro/Nanopatterning Mater. Metrol. 2023, 22, 31208. [Google Scholar] [CrossRef]
  179. Capliez, E.; Ienco, D.; Gaetano, R.; Baghdadi, N.; Salah, A.H. Temporal-Domain Adaptation for Satellite Image Time-Series Land-Cover Mapping With Adversarial Learning and Spatially Aware Self-Training. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 3645–3675. [Google Scholar] [CrossRef]
  180. Zhao, R.; Zhu, Y.; Li, Y. CLA: A self-supervised contrastive learning method for leaf disease identification with domain adaptation. Comput. Electron. Agric. 2023, 211, 107967. [Google Scholar] [CrossRef]
  181. Neff, C.; Pazho, A.D.; Tabkhi, H. Real-Time Online Unsupervised Domain Adaptation for Real-World Person Re-identification. arXiv 2023, arXiv:2306.03993. [Google Scholar] [CrossRef]
  182. Taufique, M.N.; Jahan, C.S.; Savakis, A. Continual Unsupervised Domain Adaptation in Data-Constrained Environments. IEEE Trans. Artif. Intell. 2023, 2022, 323379. [Google Scholar] [CrossRef]
  183. Wu, W. Tea Leaf Disease Classification using Domain Adaptation Method. Front. Comput. Intell. Syst. 2023, 3, 48–50. [Google Scholar] [CrossRef]
  184. Wang, Y.; Luo, F. A Survey of Crowd Counting Algorithm Based on Domain Adaptation. Acad. J. Sci. Technol. 2023, 5, 35–37. [Google Scholar] [CrossRef]
  185. Hu, X.; Huang, Y.; Li, B.; Lu, T. Inclusive FinTech Lending via Contrastive Learning and Domain Adaptation. arXiv 2023, arXiv:2305.05827. [Google Scholar]
  186. Liu, Z.; Shi, K.; Niu, D.; Huo, H.; Zhang, K. Dynamic classifier approximation for unsupervised domain adaptation. Signal. Process. 2023, 206, 108915. [Google Scholar] [CrossRef]
  187. Chen, D.; Zhu, H.; Yang, S. UC-SFDA: Source-free domain adaptation via uncertainty prediction and evidence-based contrastive learning. Knowl. Based Syst. 2023, 2023, 110728. [Google Scholar] [CrossRef]
  188. Hu, X.; Zhu, Y. Dual Frame-Level and Region-Level Alignment for Unsupervised Video Domain Adaptation. Neurocomputing 2023, 2023, 126454. [Google Scholar] [CrossRef]
  189. Zhang, Y.; Ji, J.; Ren, Z.; Ni, Q.; Gu, F.; Feng, K.; Yu, K.; Ge, J.; Lei, Z.; Liu, Z. Digital twin-driven partial domain adaptation network for intelligent fault diagnosis of rolling bearing. Reliab. Eng. Syst. Saf. 2023, 234, 109186. [Google Scholar] [CrossRef]
  190. Fu, S.; Chen, J.; Chen, D.; He, C. CNNs/ViTs-CNNs/ViTs: Mutual distillation for unsupervised domain adaptation. Inf. Sci. N. Y. 2023, 622, 83–97. [Google Scholar] [CrossRef]
  191. Du, Y.; Zhou, Y.; Xie, Y.; Zhou, D.; Shi, J.; Lei, Y. Unsupervised domain adaptation via progressive positioning of target-class prototypes. Knowl. Based Syst. 2023, 273, 110586. [Google Scholar] [CrossRef]
  192. Zhou, X.; Tian, Y.; Wang, X. MEC-DA: Memory-Efficient Collaborative Domain Adaptation for Mobile Edge Devices. IEEE Trans. Mob. Comput. 2023, 2023, 3282941. [Google Scholar] [CrossRef]
  193. Feng, Y.; Luo, Y.; Yang, J. Cross-platform privacy-preserving CT image COVID-19 diagnosis based on source-free domain adaptation. Knowl. Based Syst. 2023, 264, 110324. [Google Scholar] [CrossRef] [PubMed]
  194. Zhou, Q.; Gu, Q.; Pang, J.; Lu, X.; Ma, L. Self-adversarial disentangling for specific domain adaptation. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 2023, 3238727. [Google Scholar] [CrossRef]
  195. Wang, A.; Islam, M.; Xu, M.; Ren, H. Curriculum-Based Augmented Fourier Domain Adaptation for Robust Medical Image Segmentation. arXiv 2023, arXiv:2306.03511. [Google Scholar]
  196. Wang, J.; Wu, Z. Driver distraction detection via multi-scale domain adaptation network. IET Intell. Transp. Syst. 2023. [Google Scholar] [CrossRef]
  197. Hur, S.; Shin, I.; Park, K.; Woo, S.; Kweon, I.S. Learning Classifiers of Prototypes and Reciprocal Points for Universal Domain Adaptation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 531–540. [Google Scholar]
  198. Fan, C.; Jin, Y.; Liu, P.; Zhao, W. Transferable visual pattern memory network for domain adaptation in anomaly detection. Eng. Appl. Artif. Intell. 2023, 121, 106013. [Google Scholar] [CrossRef]
  199. Fu, Z.; Wang, S.; Zhao, X.; Long, S.; Wang, B. Causal view mechanism for adversarial domain adaptation. In Multimedia Tools and Applications; Springer: Berlin/Heidelberg, Germany, 2023; pp. 1–20. [Google Scholar]
  200. Ding, F.; Li, J.; Tian, W.; Zhang, S.; Yuan, W. Unsupervised Domain Adaptation Via Risk-Consistent Estimators. IEEE Trans. Multimed. 2023, 2023, 3277275. [Google Scholar] [CrossRef]
  201. Gao, K.; Yu, A.; You, X.; Guo, W.; Li, K.; Huang, N. Integrating Multiple Sources Knowledge for Class Asymmetry Domain Adaptation Segmentation of Remote Sensing Images. arXiv 2023, arXiv:2305.09893. [Google Scholar]
  202. Han, Z.; Zhang, Z.; Wang, F.; He, R.; Su, W.; Xi, X.; Yin, Y. Discriminability and Transferability Estimation: A Bayesian Source Importance Estimation Approach for Multi-Source-Free Domain Adaptation. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; pp. 7811–7820. [Google Scholar]
  203. Zhang, Y.; Ren, Z.; Feng, K.; Yu, K.; Beer, M.; Liu, Z. Universal source-free domain adaptation method for cross-domain fault diagnosis of machines. Mech. Syst. Signal. Process 2023, 191, 110159. [Google Scholar] [CrossRef]
  204. Ding, Y.; Jia, M.; Zhuang, J.; Cao, Y.; Zhao, X.; Lee, C.-G. Deep imbalanced domain adaptation for transfer learning fault diagnosis of bearings under multiple working conditions. Reliab. Eng. Syst. Saf. 2023, 230, 108890. [Google Scholar] [CrossRef]
  205. Hong, Y.; Chern, W.-C.; Nguyen, T.V.; Cai, H.; Kim, H. Semi-supervised domain adaptation for segmentation models on different monitoring settings. Autom. Constr. 2023, 149, 104773. [Google Scholar] [CrossRef]
  206. Liu, S.; Zhu, C.; Li, Y.; Tang, W. WUDA: Unsupervised Domain Adaptation Based on Weak Source Domain Labels. In Proceedings of the ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Rhodes Island, Greece, 4–10 June 2023; pp. 1–5. [Google Scholar]
  207. Cai, Z.; Song, J.; Zhang, T.; Hu, C.; Jing, X.-Y. Local weight coupled network: Multi-modal unequal semi-supervised domain adaptation. In Multimedia Tools and Applications; Springer: Berlin/Heidelberg, Germany, 2023; pp. 1–27. [Google Scholar]
  208. Hernandez-Diaz, K.; Alonso-Fernandez, F.; Bigun, J. One-Shot Learning for Periocular Recognition: Exploring the Effect of Domain Adaptation and Data Bias on Deep Representations. arXiv 2023, arXiv:2307.05128. [Google Scholar] [CrossRef]
  209. Zhu, Y.; Rahman, M.M.; Alam, M.A.U. Augmenting Deep Learning Adaptation for Wearable Sensor Data through Combined Temporal-Frequency Image Encoding. arXiv 2023, arXiv:2307.00883. [Google Scholar]
  210. Zhang, Z.; Xu, Y.; Song, J.; Zhou, Q.; Rasol, J.; Ma, L. Planet craters detection based on unsupervised domain adaptation. IEEE Trans. Aerosp. Electron. Syst. 2023, 2023, 3285512. [Google Scholar] [CrossRef]
  211. Chen, Y.; Fang, X.; Liu, Y.; Zheng, W.; Kang, P.; Han, N.; Xie, S. Two-Step Strategy for Domain Adaptation Retrieval. IEEE Trans. Knowl. Data Eng. 2023, 2023, 3289882. [Google Scholar] [CrossRef]
  212. Lee, J.; Lee, G. Feature Alignment by Uncertainty and Self-Training for Source-Free Unsupervised Domain Adaptation. Neural Netw. 2023, 161, 682–692. [Google Scholar] [CrossRef]
  213. Aich, A.; Peng, K.C.; Roy-Chowdhury, A.K. Cross-Domain Video Anomaly Detection without Target Domain Adaptation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 2579–2591. [Google Scholar]
  214. Lopez-Rodriguez, A.; Mikolajczyk, K. Desc: Domain adaptation for depth estimation via semantic consistency. Int. J. Comput. Vis. 2023, 131, 752–771. [Google Scholar] [CrossRef]
  215. Zhang, Y.; Wu, J.; Zhang, Q.; Hu, X. Multi-view Feature Learning for the Over-penalty in Adversarial Domain Adaptation. Data Intell. 2023, 1–16. [Google Scholar] [CrossRef]
  216. Zhu, Y.; Wu, X.; Qiang, J.; Yuan, Y.; Li, Y. Representation learning via an integrated autoencoder for unsupervised domain adaptation. Front. Comput. Sci. 2023, 17, 175334. [Google Scholar] [CrossRef]
  217. Chamarthi, S.; Fogelberg, K.; Maron, R.C.; Brinker, T.J.; Niebling, J. Mitigating the Influence of Domain Shift in Skin Lesion Classification: A Benchmark Study of Unsupervised Domain Adaptation Methods on Dermoscopic Images. arXiv 2023, arXiv:2310.03432. [Google Scholar]
  218. Kumar, V.; Lal, R.; Patil, H.; Chakraborty, A. Conmix for source-free single and multi-target domain adaptation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 4178–4188. [Google Scholar]
  219. Zara, G.; Roy, S.; Rota, P.; Ricci, E. AutoLabel: CLIP-based framework for Open-set Video Domain Adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 11504–11513. [Google Scholar]
  220. Wang, J.; Zhang, X.-L. Improving pseudo labels with intra-class similarity for unsupervised domain adaptation. Pattern Recognit. 2023, 138, 109379. [Google Scholar] [CrossRef]
  221. Ragab, M.; Eldele, E.; Wu, M.; Foo, C.-S.; Li, X.; Chen, Z. Source-Free Domain Adaptation with Temporal Imputation for Time Series Data. arXiv 2023, arXiv:2307.07542. [Google Scholar]
  222. Shi, Y.; Wu, K.; Han, Y.; Shao, Y.; Li, B.; Wu, F. Source-free and Black-box Domain Adaptation via Distributionally Adversarial Training. Pattern Recognit. 2023, 2023, 109750. [Google Scholar] [CrossRef]
  223. Wang, T.; Liu, Z.; Ou, W.; Huo, H. Domain adaptation based on feature fusion and multi-attention mechanism. Comput. Electr. Eng. 2023, 108, 108726. [Google Scholar] [CrossRef]
  224. Tinn, P. Cross-domain adaptation and geometric data synthesis for near-eye to remote gaze tracking. Ph.D. Thesis, University of Texas at Austin, Austin, TX, USA, 2023. [Google Scholar] [CrossRef]
  225. Zhao, W.; Persello, C.; Stein, A. Semantic-aware unsupervised domain adaptation for height estimation from single-view aerial images. ISPRS J. Photogramm. Remote Sens. 2023, 196, 372–385. [Google Scholar] [CrossRef]
  226. Liu, H.; Liu, Y.; Mu, T.-J.; Huang, X.; Hu, S.-M. Skeleton-CutMix: Mixing Up Skeleton with Probabilistic Bone Exchange for Supervised Domain Adaptation. IEEE Trans. Image Process. 2023, 2023, 3293766. [Google Scholar] [CrossRef]
  227. Ning, Y.; Peng, J.; Liu, Q.; Huang, Y.; Sun, W.; Du, Q. Contrastive Learning based on Category Matching for Domain Adaptation in Hyperspectral Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 2023, 3295357. [Google Scholar] [CrossRef]
  228. Li, Y.; Zhan, X.; Liu, S.; Lu, H.; Jiang, R.; Guo, W.; Chapman, S.; Ge, Y.; Solan, B.; Ding, Y.; et al. Self-supervised plant phenotyping by combining domain adaptation with 3D plant model simulations: Application to wheat leaf counting at seedling stage. Plant Phenomics 2023, 5, 41. [Google Scholar] [CrossRef]
  229. Li, K.; Patel, D.; Kruus, E.; Min, M.R. Source-Free Video Domain Adaptation With Spatial-Temporal-Historical Consistency Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 14643–14652. [Google Scholar]
  230. Mishra, S.; Sanodiya, R.K. A Novel Angular Based Unsupervised Domain Adaptation Framework for Image Classification. IEEE Trans. Artif. Intell. 2023, 2023, 3293077. [Google Scholar] [CrossRef]
  231. Wang, B.; Gu, Y.; Lu, Y. A Multi-scale Domain Adaptive Framework for Scene Text Detection. In Proceedings of the 2023 3rd International Conference on Neural Networks, Information and Communication Engineering (NNICE), Guangzhou, China, 24–26 February 2023; pp. 347–351. [Google Scholar]
  232. Xia, Y.; Yun, L.-J.; Yang, C. Transferable adversarial masked self-distillation for unsupervised domain adaptation. Complex. Intell. Syst. 2023, 1–14. [Google Scholar] [CrossRef]
  233. Thopalli, K.; Anirudh, R.; Turaga, P.; Thiagarajan, J.J. The Surprising Effectiveness of Deep Orthogonal Procrustes Alignment in Unsupervised Domain Adaptation. IEEE Access 2023, 11, 12858–12869. [Google Scholar] [CrossRef]
  234. Ren, W.; Chen, Q.; Yang, Z. Adversarial discriminative domain adaptation for modulation classification based on Ulam stability. IET Radar Sonar Navig. 2023, 17, 1175–1181. [Google Scholar] [CrossRef]
  235. Sabha, A.; Selwal, A. Domain adaptation assisted automatic real-time human-based video summarization. Eng. Appl. Artif. Intell. 2023, 124, 106584. [Google Scholar] [CrossRef]
  236. Li, Y.; Wang, S.; Wang, B. Dual teacher–student based separation mechanism for open set domain adaptation. Knowl. Based Syst. 2023, 272, 110600. [Google Scholar] [CrossRef]
  237. Kothandaraman, D.; Shekhar, S.; Sancheti, A.; Ghuhan, M.; Shukla, T.; Manocha, D. SALAD: Source-free Active Label-Agnostic Domain Adaptation for Classification, Segmentation and Detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 382–391. [Google Scholar]
  238. Zhang, Y.; Wang, Z.; He, W. Class Relationship Embedded Learning for Source-Free Unsupervised Domain Adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 7619–7629. [Google Scholar]
  239. Acharya, D.; Tatli, C.J.; Khoshelham, K. Synthetic-real image domain adaptation for indoor camera pose regression using a 3D model. ISPRS J. Photogramm. Remote Sens. 2023, 202, 405–421. [Google Scholar] [CrossRef]
  240. Chhabra, S.; Venkateswara, H.; Li, B. Generative Alignment of Posterior Probabilities for Source-free Domain Adaptation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 4125–4134. [Google Scholar]
  241. Luo, Y.-W.; Ren, C.-X. MOT: Masked Optimal Transport for Partial Domain Adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 3531–3540. [Google Scholar]
  242. Shenaj, D.; Fanì, E.; Toldo, M.; Caldarola, D.; Tavera, A.; Michieli, U.; Ciccone, M.; Zanuttigh, P.; Caputo, B. Learning across domains and devices: Style-driven source-free domain adaptation in clustered federated learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 444–454. [Google Scholar]
  243. Piva, J.; de Geus, D.; Dubbelman, G. Empirical generalization study: Unsupervised domain adaptation vs. domain generalization methods for semantic segmentation in the wild. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 499–508. [Google Scholar]
  244. Ma, N.; Wang, H.; Zhang, Z.; Zhou, S.; Chen, H.; Bu, J. Source-free semi-supervised domain adaptation via progressive Mixup. Knowl. Based Syst. 2023, 262, 110208. [Google Scholar] [CrossRef]
  245. Zhang, C.; Liu, B.; Xin, Y.; Yao, L. CPVD: Cross Project Vulnerability Detection Based On Graph Attention Network And Domain Adaptation. IEEE Trans. Softw. Eng. 2023, 2023, 3285910. [Google Scholar] [CrossRef]
  246. Wei, G.; Li, X.; Huang, L.; Nie, J.; Wei, Z. Unsupervised domain adaptation via reliable pseudolabeling based memory module and dynamic distance threshold learning. Knowl. Based Syst. 2023, 2023, 110667. [Google Scholar] [CrossRef]
  247. Kim, G.; Chun, S.Y. Datid-3d: Diversity-preserved domain adaptation using text-to-image diffusion for 3d generative model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 14203–14213. [Google Scholar]
  248. Li, W.; Liu, J.; Han, B.; Yuan, Y. Adjustment and Alignment for Unbiased Open Set Domain Adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 24110–24119. [Google Scholar]
  249. Piva, F.J.; Dubbelman, G. Exploiting image translations via ensemble self-supervised learning for unsupervised domain adaptation. In Computer Vision and Image Understanding; Elsevier: Amsterdam, The Netherlands, 2023; p. 103745. [Google Scholar]
  250. Yang, S.; Wang, Y.; Herranz, L.; Jui, S.; van de Weijer, J. Casting a BAIT for offline and online source-free domain adaptation. In Computer Vision and Image Understanding; Elsevier: Amsterdam, The Netherlands, 2023; p. 103747. [Google Scholar]
  251. Moreu, E.; Martinelli, A.; Naughton, M.; Kelly, P.; O’Connor, N.E. Fashion CUT: Unsupervised domain adaptation for visual pattern classification in clothes using synthetic data and pseudo-labels. In Scandinavian Conference on Image Analysis; Springer Nature: Cham, Switzerland, 2023; pp. 314–324. [Google Scholar]
  252. Zhu, D.; Li, Y.; Yuan, J.; Li, Z.; Kuang, K.; Wu, C. Universal Domain Adaptation via Compressive Attention Matching. arXiv 2023, arXiv:2304.11862. [Google Scholar]
  253. Ustun, B.; Kaya, A.K.; Ayerden, E.C.; Altinel, F. Spectral Transfer Guided Active Domain Adaptation For Thermal Imagery. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 449–458. [Google Scholar]
  254. Boudiaf, M.; Denton, T.; van Merriënboer, B.; Dumoulin, V.; Triantafillou, E. In Search for a Generalizable Method for Source Free Domain Adaptation. arXiv 2023, arXiv:2302.06658. [Google Scholar]
  255. Yang, J.; Liu, J.; Xu, N.; Huang, J. Tvt: Transferable vision transformer for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 520–530. [Google Scholar]
  256. Dai, Q.; Wong, Y.; Sun, G.; Wang, Y.; Zhou, Z.; Kankanhalli, M.S.; Li, X.; Geng, W. Unsupervised Domain Adaptation by Causal Learning for Biometric Signal based HCI. ACM Trans. Multimed. Comput. Commun. Appl. 2023. [Google Scholar] [CrossRef]
  257. Yu, Z.; Li, J.; Du, Z.; Zhu, L.; Shen, H.T. A Comprehensive Survey on Source-free Domain Adaptation. arXiv 2023, arXiv:2302.11803. [Google Scholar]
  258. Siry, R.; Hémadou, L.; Simon, L.; Jurie, F. On the inductive biases of deep domain adaptation. Comput. Vis. Image Underst. 2023, 233, 103714. [Google Scholar] [CrossRef]
  259. Han, Z.; Su, W.; He, R.; Yin, Y. SNAIL: Semi-Separated Uncertainty Adversarial Learning for Universal Domain Adaptation. In Proceedings of the Asian Conference on Machine Learning, Hyderabad, India, 12–14 December 2022; pp. 436–451. [Google Scholar]
  260. Zhu, J.; Bai, H.; Wang, L. Patch-Mix Transformer for Unsupervised Domain Adaptation: A Game Perspective. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 3561–3571. [Google Scholar]
  261. Zheng, X.; Zhu, J.; Liu, Y.; Cao, Z.; Fu, C.; Wang, L. Both Style and Distortion Matter: Dual-Path Unsupervised Domain Adaptation for Panoramic Semantic Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 1285–1295. [Google Scholar]
  262. Chen, Q.; Marchand, M. Algorithm-Dependent Bounds for Representation Learning of Multi-Source Domain Adaptation. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Palau de Congressos, Valencia, Spain, 25–27 April 2023; pp. 10368–10394. [Google Scholar]
  263. Maurya, J.; Ranipa, K.R.; Yamaguchi, O.; Shibata, T.; Kobayashi, D. Domain Adaptation using Self-Training with Mixup for One-Stage Object Detection. In Proceedings of the 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Waikoloa, HI, USA, 2–7 January 2023; pp. 4178–4187. [Google Scholar]
  264. Truong, T.-D.; Le, N.; Raj, B.; Cothren, J.; Luu, K. Fredom: Fairness domain adaptation approach to semantic scene understanding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 19988–19997. [Google Scholar]
  265. Devika, K.; Sanodiya, R.K.; Jose, B.R.; Mathew, J. Visual Domain Adaptation through Locality Information. Eng. Appl. Artif. Intell. 2023, 123, 106172. [Google Scholar]
  266. Xia, S.; Huang, H.; Li, Q.; He, Y. Prototype-guided Unsupervised Domain Adaptation for Semantic Segmentation. In Proceedings of the 2023 IEEE International Conference on Control, Electronics and Computer Technology (ICCECT), Jilin, China, 28–30 April 2023; pp. 1147–1151. [Google Scholar]
  267. Wang, Z.; Liu, X.; Suganuma, M.; Okatani, T. Unsupervised domain adaptation for semantic segmentation via cross-region alignment. In Computer Vision and Image Understanding; Elsevier: Amsterdam, The Netherlands, 2023; p. 103743. [Google Scholar]
  268. Liang, C.; Cheng, B.; Xiao, B.; Dong, Y. Unsupervised Domain Adaptation for Remote Sensing Image Segmentation Based on Adversarial Learning and Self-Training. IEEE Geosci. Remote Sens. Lett. 2023, 2023, 103743. [Google Scholar] [CrossRef]
  269. Yang, Z.; Soltani, I.; Darve, E. Anomaly detection with domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 2957–2966. [Google Scholar]
  270. Essich, M.; Rehmann, M.; Curio, C. Auxiliary Task-Guided CycleGAN for Black-Box Model Domain Adaptation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 541–550. [Google Scholar]
  271. Zeng, Z.; Li, D.; Yang, X. Deep Domain Adaptation Using Cascaded Learning Networks and Metric Learning. IEEE Access 2023, 11, 3564–3572. [Google Scholar] [CrossRef]
  272. Ahmed, W.; Morerio, P.; Murino, V. Continual Source-Free Unsupervised Domain Adaptation. arXiv 2023, arXiv:2304.07374. [Google Scholar]
  273. Tang, H.; Jia, K. A New Benchmark: On the Utility of Synthetic Data with Blender for Bare Supervised Learning and Downstream Domain Adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 15954–15964. [Google Scholar]
  274. Park, D.; Kim, M.; Kim, H.; Lee, J.; Chun, S.Y. Domain adaptation from posteroanterior to anteroposterior X-ray radiograph classification via deep neural converter with label recycling. In Proceedings of the 2023 International Conference on Electronics, Information, and Communication (ICEIC), Singapore, 5–8 February 2023; pp. 1–4. [Google Scholar]
  275. Liu, Y.; Qiao, L.; Lu, C.; Yin, D.; Lin, C.; Peng, H.; Ren, B. OSAN: A One-Stage Alignment Network To Unify Multimodal Alignment and Unsupervised Domain Adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 3551–3560. [Google Scholar]
  276. Litrico, M.; Del Bue, A.; Morerio, P. Guiding Pseudo-Labels With Uncertainty Estimation for Source-Free Unsupervised Domain Adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 7640–7650. [Google Scholar]
  277. Xu, R.; Wang, C.; Xu, S.; Meng, W.; Zhang, Y.; Fan, B.; Zhang, X. DomainFeat: Learning Local Features with Domain Adaptation. IEEE Trans. Circuits Syst. Video Technol. 2023, 2023, 3282956. [Google Scholar] [CrossRef]
  278. Li, Y.; Liu, Y.; Zheng, D.; Huang, Y.; Tang, Y. Discriminable feature enhancement for unsupervised domain adaptation. In Image and Vision Computing; Elsevier: Amsterdam, The Netherlands, 2023; p. 104755. [Google Scholar]
  279. Weng, X.; Huang, Y.; Li, Y.; Yang, H.; Yu, S. Unsupervised domain adaptation for crack detection. Autom. Constr. 2023, 153, 104939. [Google Scholar] [CrossRef]
  280. Alcover-Couso, R.; SanMiguel, J.C.; Escudero-Viñolo, M.; Garcia-Martin, A. On exploring weakly supervised domain adaptation strategies for semantic segmentation using synthetic data. In Multimedia Tools and Applications; Springer: Berlin/Heidelberg, Germany, 2023; pp. 1–33. [Google Scholar]
  281. Ajith, A.; Gopakumar, G. Domain Adaptation: A Survey. In Computer Vision and Machine Intelligence: Proceedings of CVMI 2022; Springer: Berlin/Heidelberg, Germany, 2023; pp. 591–602. [Google Scholar]
  282. Na, J.; Han, D.; Chang, H.J.; Hwang, W. Contrastive vicinal space for unsupervised domain adaptation. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2022; pp. 92–110. [Google Scholar]
  283. Reddy, A.V.; Shah, K.; Paul, W.; Mocharla, R.; Hoffman, J.; Katyal, K.D.; Manocha, D.; de Melo, C.M.; Chellappa, R. Synthetic-to-real domain adaptation for action recognition: A dataset and baseline performances. arXiv 2023, arXiv:2303.10280. [Google Scholar]
  284. Li, Z.; Togo, R.; Ogawa, T.; Haseyama, M. Learning intra-domain style-invariant representation for unsupervised domain adaptation of semantic segmentation. Pattern Recognit. 2022, 132, 108911. [Google Scholar] [CrossRef]
  285. Yue, Z.; Zeng, H.; Kou, Z.; Shang, L.; Wang, D. Contrastive domain adaptation for early misinformation detection: A case study on COVID-19. In Proceedings of the 31st ACM International Conference on Information & Knowledge Management, Atlanta, GA, USA, 17–21 October 2022; pp. 2423–2433. [Google Scholar]
  286. Tian, Q.; Peng, S.; Sun, H.; Zhou, J.; Zhang, H. Source-free unsupervised domain adaptation with maintaining model balance and diversity. Comput. Electr. Eng. 2022, 104, 108408. [Google Scholar] [CrossRef]
  287. Shin, H.; Kim, H.; Kim, S.; Jun, Y.; Eo, T.; Hwang, D. SDC-UDA: Volumetric Unsupervised Domain Adaptation Framework for Slice-Direction Continuous Cross-Modality Medical Image Segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 7412–7421. [Google Scholar]
  288. Ren, W.; Yang, Z.; Wang, X. A two-branch symmetric domain adaptation neural network based on Ulam stability theory. Inf. Sci. N. Y. 2023, 628, 424–438. [Google Scholar] [CrossRef]
  289. Zhang, S.; Su, L.; Gu, J.; Li, K.; Zhou, L.; Pecht, M. Rotating machinery fault detection and diagnosis based on deep domain adaptation: A survey. Chin. J. Aeronaut. 2023, 36, 45–74. [Google Scholar] [CrossRef]
  290. Shen, Y.; Yang, Y.; Yan, M.; Wang, H.; Zheng, Y.; Guibas, L.J. Domain adaptation on point clouds via geometry-aware implicits. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 7223–7232. [Google Scholar]
  291. Ding, R.; Yang, J.; Jiang, L.; Qi, X. DODA: Data-Oriented Sim-to-Real Domain Adaptation for 3D Semantic Segmentation. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2022; pp. 284–303. [Google Scholar]
  292. Yoo, J.; Chung, I.; Kwak, N. Unsupervised domain adaptation for one-stage object detector using offsets to bounding box. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2022; pp. 691–708. [Google Scholar]
  293. Hao, Z.; Liang, T. Source-Free Unsupervised Domain Adaptation via Denoising Mutual Learning. In Proceedings of the 2022 19th International Computer Conference on Wavelet Active Media Technology and Information Processing (ICCWAMTIP), Chengdu, China, 16–18 December 2022; pp. 1–7. [Google Scholar]
  294. Bashkirova, D.; Mishra, S.; Lteif, D.; Teterwak, P.; Kim, D.; Alladkani, F.; Akl, J.; Calli, B.; Bargal, S.A.; Saenko, K.; et al. VisDA 2022 Challenge: Domain Adaptation for Industrial Waste Sorting. arXiv 2023, arXiv:2303.14828. [Google Scholar]
  295. Ouyang, J.; Zhang, Z.; Meng, Q.; Li, X.; Thanh, D.N.H. Adaptive prototype and consistency alignment for semi-supervised domain adaptation. In Multimedia Tools and Applications; Springer: Berlin/Heidelberg, Germany, 2023; pp. 1–22. [Google Scholar]
  296. Xi, Z.; He, X.; Meng, Y.; Yue, A.; Chen, J.; Deng, Y.; Chen, J. A Multilevel-Guided Curriculum Domain Adaptation Approach to Semantic Segmentation for High-Resolution Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2023. [Google Scholar] [CrossRef]
  297. Xiao, L.; Xu, J.; Zhao, D.; Shang, E.; Zhu, Q.; Dai, B. Adversarial and Random Transformations for Robust Domain Adaptation and Generalization. Sensors 2023, 23, 5273. [Google Scholar] [CrossRef] [PubMed]
  298. Dan, J.; Jin, T.; Chi, H.; Shen, Y.; Yu, J.; Zhou, J. HOMDA: High-order moment-based domain alignment for unsupervised domain adaptation. Knowl. Based Syst. 2023, 261, 110205. [Google Scholar] [CrossRef]
  299. Wang, X.; Xu, Y.; Yang, J.; Mao, K.; Li, X.; Chen, Z. Confidence Attention and Generalization Enhanced Distillation for Continuous Video Domain Adaptation. arXiv 2023, arXiv:2303.10452. [Google Scholar]
  300. Kuznietsov, Y.; Proesmans, M.; Van Gool, L. Towards unsupervised online domain adaptation for semantic segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2022; pp. 261–271. [Google Scholar]
  301. Rahman, M.; Panda, R.; Alam, M.A.U. Semi-Supervised Domain Adaptation with Auto-Encoder via Simultaneous Learning. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 402–411. [Google Scholar]
  302. Duan, Y.; Tu, J.; Chen, C. SGDA: A Saliency-Guided Domain Adaptation Network for Nighttime Semantic Segmentation. In Proceedings of the 2023 IEEE 6th International Conference on Industrial Cyber-Physical Systems (ICPS), Wuhan, China, 8–11 May 2023; pp. 1–6. [Google Scholar]
  303. Li, W.; Fan, K.; Yang, H. Teacher–Student Mutual Learning for efficient source-free unsupervised domain adaptation. Knowl. Based Syst. 2023, 261, 110204. [Google Scholar] [CrossRef]
  304. Zhang, D.; Ye, M.; Liu, Y.; Xiong, L.; Zhou, L. Multi-source unsupervised domain adaptation for object detection. Inf. Fusion 2022, 78, 138–148. [Google Scholar] [CrossRef]
  305. Ahn, W.J.; Kang, G.; Choi, H.D.; Lim, M.T. Domain Adaptation for Complex Shadow Removal with Shadow Transformer Network. In Neurocomputing; Elsevier: Amsterdam, The Netherlands, 2023; p. 126559. [Google Scholar]
  306. Xie, M.; Li, Y.; Wang, Y.; Luo, Z.; Gan, Z.; Sun, Z.; Chi, M.; Wang, C.; Wang, P. Learning distinctive margin toward active domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 7993–8002. [Google Scholar]
  307. Chang, W.; Shi, Y.; Tuan, H.; Wang, J. Unified optimal transport framework for universal domain adaptation. Adv. Neural Inf. Process. Syst. 2022, 35, 29512–29524. [Google Scholar]
  308. Kalluri, T.; Chandraker, M. Cluster-to-adapt: Few shot domain adaptation for semantic segmentation across disjoint labels. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 4121–4131. [Google Scholar]
  309. Wu, X.; Fan, X.; Luo, P.; Choudhury, S.D.; Tjahjadi, T.; Hu, C. From Laboratory to Field: Unsupervised Domain Adaptation for Plant Disease Recognition in the Wild. Plant Phenomics 2023, 5, 38. [Google Scholar] [CrossRef]
  310. Xia, H.; Wang, P.; Ding, Z. Incomplete Multi-view Domain Adaptation via Channel Enhancement and Knowledge Transfer. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2022; pp. 200–217. [Google Scholar]
  311. Farahani, A.; Voghoei, S.; Rasheed, K.; Arabnia, H.R. A Brief Review of Domain Adaptation. arXiv 2020, arXiv:2010.03978. [Google Scholar]
  312. Liu, X.; Yoo, C.; Xing, F.; Oh, H.; Fakhri, G.E.; Kang, J.-W.; Woo, J. Deep Unsupervised Domain Adaptation: A Review of Recent Advances and Perspectives. arXiv 2022, arXiv:2208.07422. [Google Scholar] [CrossRef]
  313. Kouw, W.M.; Loog, M. An introduction to domain adaptation and transfer learning. arXiv 2019, arXiv:1812.11806. [Google Scholar]
  314. Li, J.; Xu, R.; Ma, J.; Zou, Q.; Ma, J.; Yu, H. Domain Adaptation based Enhanced Detection for Autonomous Driving in Foggy and Rainy Weather. arXiv 2023, arXiv:2307.09676. [Google Scholar] [CrossRef]
  315. Csurka, G. Domain adaptation for visual applications: A comprehensive survey. arXiv 2017, arXiv:1702.05374. [Google Scholar]
  316. Gao, H.; Guo, J.; Wang, G.; Zhang, Q. Cross-domain correlation distillation for unsupervised domain adaptation in nighttime semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 9913–9923. [Google Scholar]
  317. Xu, L.; Bennamoun, M.; Boussaid, F.; Laga, H.; Ouyang, W.; Xu, D. MCTformer+: Multi-Class Token Transformer for Weakly Supervised Semantic Segmentation. arXiv 2023, arXiv:2308.03005. [Google Scholar]
  318. Meng, M.; Zhang, T.; Zhang, Z.; Zhang, Y.; Wu, F. Adversarial Transformers for Weakly Supervised Object Localization. IEEE Trans. Image Process. 2022, 31, 7130–7143. [Google Scholar] [CrossRef] [PubMed]
  319. Brodeur, A.; Clark, A.E.; Fleche, S.; Powdthavee, N. COVID-19, lockdowns and well-being: Evidence from Google Trends. J. Public Econ. 2021, 193, 104346. [Google Scholar] [CrossRef] [PubMed]
  320. Rao, A.; Sharma, G.D.; Pereira, V.; Shahzad, U.; Jabeen, F. Analyzing cyberchondriac Google Trends data to forecast waves and avoid friction: Lessons from COVID-19 in India. IEEE Trans. Eng. Manag. 2022, 2022, 3147375. [Google Scholar] [CrossRef]
Figure 1. SLR methodology flow diagram.
Figure 2. Google Trends: region-wise interest in domain adaptation, computer vision, and robotic vision (color intensity represents the percentage of searches).
Figure 3. Source-relaxed domain adaptation for image segmentation [30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49].
Figure 4. Domain adaptation with contrastive learning for object identification [33,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68].
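The contrastive objectives summarized in Figure 4 pull the features of matching cross-domain samples together while pushing mismatched ones apart. As a purely illustrative sketch (the function name info_nce, the temperature value, and the use of in-batch negatives are our assumptions rather than any single surveyed method), an InfoNCE-style loss can be written in a few lines of PyTorch:

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, temperature=0.1):
    # anchor, positive: (N, d) feature batches; row i of `positive` is the
    # cross-domain match of row i of `anchor`; all other rows in the batch
    # serve as negatives.
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    logits = a @ p.t() / temperature                 # (N, N) cosine similarities
    labels = torch.arange(a.size(0), device=a.device)  # positives on the diagonal
    return F.cross_entropy(logits, labels)
```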
Figure 5. Domain adaptation for object classification [68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85].
Figure 6. Adversarial Discriminative Domain Adaptation (ADDA), an unsupervised domain adaptation method that combines adversarial learning with discriminative feature learning.
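To make the mechanism depicted in Figure 6 concrete, the sketch below alternates ADDA's two adversarial steps: the discriminator learns to separate source features from target features, and the target encoder is then updated with inverted labels to fool it, while the pre-trained source encoder stays frozen. The function name, optimizer settings, and data loaders (src_loader, tgt_loader) are illustrative assumptions, not the original authors' code; a complete pipeline would first train the source encoder and classifier on labeled source data.

```python
# Minimal ADDA-style adaptation loop (a sketch under assumed loaders/networks).
import torch
import torch.nn as nn

def adapt(src_encoder, tgt_encoder, discriminator, src_loader, tgt_loader):
    bce = nn.BCEWithLogitsLoss()
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
    opt_t = torch.optim.Adam(tgt_encoder.parameters(), lr=1e-4)
    for (x_s, _), (x_t, _) in zip(src_loader, tgt_loader):
        # 1) Discriminator: source features -> 1, target features -> 0.
        f_s = src_encoder(x_s).detach()   # source encoder stays frozen in ADDA
        f_t = tgt_encoder(x_t).detach()
        d_loss = bce(discriminator(f_s), torch.ones(f_s.size(0), 1)) \
               + bce(discriminator(f_t), torch.zeros(f_t.size(0), 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # 2) Target encoder: fool the discriminator (inverted labels), mapping
        #    target data into the discriminative source feature space.
        g_loss = bce(discriminator(tgt_encoder(x_t)), torch.ones(x_t.size(0), 1))
        opt_t.zero_grad(); g_loss.backward(); opt_t.step()
```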
Figure 7. Mechanism of a deep learning-based domain-adaptable model.
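Many of the deep architectures compared in Table 8 below (DANN and its CDAN/MADA descendants) realize the adversarial branch of Figure 7 with a gradient reversal layer ("RevGrad"): the domain classifier is trained normally, while the sign-flipped gradient drives the feature extractor toward domain-invariant representations. A minimal sketch of the layer, with the surrounding model wiring left as an assumption, is:

```python
# Gradient reversal layer as a PyTorch autograd Function (illustrative sketch).
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)          # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the gradient's sign so one backward pass plays the adversarial
        # game: the domain classifier learns to discriminate domains, while
        # the feature extractor learns to confuse it.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage sketch: domain_logits = domain_classifier(grad_reverse(features, lambd)),
# followed by an ordinary cross-entropy domain loss.
```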
Figure 8. Domain shift due to a change in weather.
Figure 9. An illustration of assessing a robot’s performance on an unknown task through domain adaptation.
Table 1. Region-wise comparison of domain adaptation research papers.

Region | Number of Papers
Asia | 180
Europe | 66
North America | 104
South America | 46
Oceania | 31
Table 2. Prior works on domain adaptation for image segmentation.

References | Title | Last Author | Year | Citations | Graph Citations
[30] | CyCADA: Cycle-Consistent Adversarial Domain Adaptation | Trevor Darrell | 2017 | 2445 | 29
[31] | Adversarial Discriminative Domain Adaptation | Trevor Darrell | 2017 | 3784 | 25
[32] | Deep Residual Learning for Image Recognition | Jian Sun | 2015 | 141,511 | 23
[33] | DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs | A. Yuille | 2016 | 13,627 | 19
[34] | The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes | Antonio M. López | 2016 | 1784 | 18
[35] | Adam: A Method for Stochastic Optimization | Jimmy Ba | 2014 | 124,700 | 18
[36] | Playing for Data: Ground Truth from Computer Games | V. Koltun | 2016 | 1554 | 18
[37] | The Cityscapes Dataset for Semantic Urban Scene Understanding | B. Schiele | 2016 | 8686 | 18
[38] | Learning Transferable Features with Deep Adaptation Networks | Michael I. Jordan | 2015 | 3990 | 17
[39] | Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks | Alexei A. Efros | 2017 | 4296 | 16
Table 3. Derivative works on domain adaptation for image segmentation.

References | Title | Last Author | Year | Citations | Graph
[40] | Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives | Pravendra Singh | 2023 | 0 | 18
[41] | A Comprehensive Survey on Test Time Adaptation under Distribution Shifts | Tien-Ping Tan | 2023 | 18 | 18
[42] | Towards Better Stability and Adaptability: Improve Online Self-Training for Model Adaptation in Semantic Segmentation | Licheng Jiao | 2023 | 0 | 15
[43] | Source-Free Unsupervised Domain Adaptation: A Survey | Mingxia Liu | 2023 | 6 | 15
[44] | Source Data-Free Cross-Domain Semantic Segmentation: Align, Teach and Propagate | Zhaoxiang Zhang | 2021 | 4 | 15
[45] | Unsupervised Adaptation of Semantic Segmentation Models without Source Data | G. Aggarwal | 2021 | 7 | 15
[46] | Domain Adaptive Semantic Segmentation without Source Data: Align, Teach and Propagate | Zhaoxiang Zhang | 2021 | 2 | 14
[47] | Unsupervised Domain Adaptation for Semantic Image Segmentation: a Comprehensive Survey | Boris Chidlovskii | 2021 | 23 | 14
[48] | Self-training via Metric Learning for Source-Free Domain Adaptation of Semantic Segmentation | U. Halici | 2022 | 2 | 13
[49] | Semantic Image Segmentation: Two Decades of Research | Boris Chidlovskii | 2023 | 7 | 13
Table 4. Prior works on domain adaptation for object identification.

References | Title | Last Author | Year | Citations | Graph Citations
[50] | Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks | Jian Sun | 2015 | 46,871 | 40
[55] | You Only Look Once: Unified, Real-Time Object Detection | Ali Farhadi | 2015 | 25,552 | 35
[56] | Focal Loss for Dense Object Detection | Piotr Dollár | 2017 | 15,883 | 35
[51] | SSD: Single Shot MultiBox Detector | A. Berg | 2015 | 21,997 | 34
[52] | Deep Residual Learning for Image Recognition | Jian Sun | 2015 | 141,511 | 33
[53] | Microsoft COCO: Common Objects in Context | C. L. Zitnick | 2014 | 30,078 | 33
[33] | Feature Pyramid Networks for Object Detection | Serge J. Belongie | 2016 | 15,509 | 33
[57] | YOLOv4: Optimal Speed and Accuracy of Object Detection | H. Liao | 2020 | 7287 | 31
[58] | YOLOv3: An Incremental Improvement | Ali Farhadi | 2018 | 14,615 | 29
[59] | YOLO9000: Better, Faster, Stronger | Ali Farhadi | 2016 | 11,721 | 25
Table 5. Derivative works on domain adaptation for object identification.

References | Title | Last Author | Year | Citations | Graph References
[54] | Towards Large-Scale Small Object Detection: Survey and Benchmarks | Junwei Han | 2022 | 34 | 5
[58] | Semi-Supervised Person Detection in Aerial Images with Instance Segmentation and Maximum Mean Discrepancy Distance | Mingyi He | 2023 | 1 | 4
[60] | A unified and costless approach for improving small and long-tail object detection in aerial images of traffic scenarios | Xinkai Wu | 2022 | 1 | 4
[61] | Pareto Refocusing for Drone-View Object Detection | Xinbo Gao | 2023 | 3 | 4
[62] | RFLA: Gaussian Receptive Field based Label Assignment for Tiny Object Detection | Guisong Xia | 2022 | 24 | 4
[63] | DB-YOLOv5: A UAV Object Detection Model Based on Dual Backbone Network for Security Surveillance | Xujie Jiang | 2023 | 0 | 3
[64] | Small object detection leveraging density-aware scale adaptation | Z. Gao | 2023 | 0 | 3
[65] | CFANet: Efficient Detection of UAV Image Based on Cross-Layer Feature Aggregation | Wei Li | 2023 | 1 | 3
[66] | OGMN: Occlusion-guided Multi-task Network for Object Detection in UAV Images | Xian Sun | 2023 | 1 | 3
[67] | Object Detection for UAV Aerial Scenarios Based on Vectorized IOU | Shuang Wu | 2023 | 3 | 3
Table 6. Prior works on domain adaptation for object classification.

References | Title | Last Author | Year | Citations | Graph
[68] | Learning Deep Features for Discriminative Localization | A. Torralba | 2015 | 7207 | 41
[69] | Adversarial Complementary Learning for Weakly Supervised Object Localization | Thomas S. Huang | 2018 | 455 | 41
[70] | Self-produced Guidance for Weakly-supervised Object Localization | Thomas Huang | 2018 | 210 | 40
[75] | Hide-and-Seek: Forcing a Network to be Meticulous for Weakly-Supervised Object and Action Localization | Yong Jae Lee | 2017 | 541 | 39
[76] | CutMix: Regularization Strategy to Train Strong Classifiers With Localizable Features | Y. Yoo | 2019 | 2993 | 38
[71] | Attention-Based Dropout Layer for Weakly Supervised Object Localization | Hyunjung Shim | 2019 | 277 | 37
[72] | DANet: Divergent Activation for Weakly Supervised Object Localization | Qixiang Ye | 2019 | 112 | 30
[73] | The Caltech-UCSD Birds-200–2011 Dataset | Serge J. Belongie | 2011 | 3635 | 30
[74] | Deep Residual Learning for Image Recognition | Jian Sun | 2015 | 141,511 | 29
Table 7. Derivative works on domain adaptation for object classification.

References | Title | Last Author | Year | Citations | Graph
[77] | Generative Prompt Model for Weakly Supervised Object Localization | Fang Wan | 2023 | 4 | 14
[78] | Open-World Weakly-Supervised Object Localization | Mike Zheng Shou | 2023 | 1 | 11
[79] | Further Improving Weakly-supervised Object Localization via Causal Knowledge Distillation | Jun Xiao | 2023 | 1 | 11
[80] | What is Where by Looking: Weakly-Supervised Open-World Phrase-Grounding without Text Inputs | Lior Wolf | 2022 | 5 | 9
[81] | Learning Multi-Modal Class-Specific Tokens for Weakly Supervised Dense Object Localization | Dan Xu | 2023 | 2 | 8
[82] | Deep Learning for Weakly-Supervised Object Detection and Localization: A Survey | Jun Xiao | 2022 | 39 | 8
[83] | Rethinking the Localization in Weakly Supervised Object Localization | Yonggang Wen | 2023 | 0 | 7
[84] | MCTformer+: Multi-Class Token Transformer for Weakly Supervised Semantic Segmentation | Dan Xu | 2023 | 0 | 7
[85] | Adversarial Transformers for Weakly Supervised Object Localization | Feng Wu | 2022 | 1 | 7
Table 8. Comparative analysis of models and algorithms used with domain adaptation in computer/robotic vision.

Model | Stack/Technology | Accuracy | Domain | Contribution | Advantages | Disadvantages | Uniqueness
ADDA [97] | Adversarial | 88.80% | Image classification | Introduced an adversarial discriminator to transfer learning | Better performance than traditional transfer learning | More complex than traditional transfer learning | Improved performance over traditional transfer learning
ADDA-DA [98] | Adversarial | 89.60% | Image classification | Enhanced ADDA by introducing a domain adversarial network | Better performance on small datasets | More complex than ADDA | Improved performance on small datasets
ADDA-DA-IRM [99] | Adversarial | 92.80% | Image classification | Enhanced ADDA by introducing a domain adversarial network | Better performance than ADDA | More complex than ADDA | Improved performance over ADDA
ADDA-DA-RevGrad [100] | Adversarial | 92.90% | Image classification | Enhanced ADDA-DA by introducing an importance reweighting mechanism | Better performance than ADDA-DA | More complex than ADDA-DA | Improved performance over ADDA-DA
CDAN [101] | Adversarial | 87.50% | Image classification | Enhanced DANN by introducing a conditional discriminator | Better performance on small datasets | More complex than DANN | Improved performance on small datasets
CDAN-MMD [102] | Adversarial | 90.30% | Image classification | Enhanced CDAN by introducing a maximum mean discrepancy loss | Better performance than CDAN | More complex than CDAN | Improved performance over CDAN
CDAN-MMD-IRM-RevGrad [103] | Adversarial | 92.20% | Image classification | Enhanced CDAN-MMD by introducing an importance reweighting mechanism, a reversal gradient technique, and a maximum mean discrepancy loss | Better performance than CDAN-MMD | More complex than CDAN-MMD | Improved performance over CDAN-MMD
CDAN-RevGrad [104] | Adversarial | 90.70% | Image classification | Enhanced CDAN by introducing a reversal gradient technique | Better performance than CDAN | More complex than CDAN | Improved performance over CDAN
CDAN-SA-IRM [105] | Adversarial | 91.00% | Image classification | Enhanced CDAN-SA by introducing an importance reweighting mechanism | Better performance than CDAN-SA | More complex than CDAN-SA | Improved performance over CDAN-SA
CDAN-SA-IRM-CTC [106] | Adversarial | 92.40% | Text classification | Enhanced DANN-SA-IRM by introducing a CTC loss | Better performance than DANN-SA-IRM | More complex than DANN-SA-IRM | Improved performance over DANN-SA-IRM
CDAN-SA-IRM-RevGrad [107] | Adversarial | 91.50% | Image classification | Enhanced CDAN-SA by introducing an importance reweighting mechanism and a reversal gradient technique | Better performance than CDAN-SA | More complex than CDAN-SA | Improved performance over CDAN-SA
CDAN-SA-MMD-IRM-RevGrad [108] | Adversarial | 91.60% | Image classification | Enhanced CDAN-SA-MMD by introducing an importance reweighting mechanism, a reversal gradient technique, and a maximum mean discrepancy loss | Better performance than CDAN-SA-MMD | More complex than CDAN-SA-MMD | Improved performance over CDAN-SA-MMD
CDAN-SA-MMD-IRM-RevGrad-MTL [109] | Generative | 91.80% | Image classification | Enhanced CoGAN by introducing an importance reweighting mechanism | Better performance than CoGAN | More complex than CoGAN | Improved performance over CoGAN
CDAN-SA-MMD-SA-IRM-RevGrad-MTL [110] | Adversarial | 92.20% | Image classification | Enhanced DANN-SA by introducing a multi-task learning framework | Better performance than DANN-SA | More complex than DANN-SA | Improved performance over DANN-SA
CDAN-SA-MTL [111] | Adversarial | 91.90% | Image classification | Enhanced DANN-SA by introducing a multi-task learning framework | Better performance than DANN-SA | More complex than DANN-SA | Improved performance over DANN-SA
CDAN-SA-RevGrad [112] | Adversarial | 91.10% | Image classification | Enhanced CDAN-SA by introducing a reversal gradient technique | Better performance than CDAN-SA | More complex than CDAN-SA | Improved performance over CDAN-SA
CoGAN [113] | Generative | 89.50% | Image classification | Introduced a conditional generative adversarial network to domain adaptation | Better performance than DANN | More complex than DANN | Improved performance over DANN
CoGAN-IRM [114] | Generative | 92.30% | Image classification | Enhanced CoGAN by introducing an importance reweighting mechanism | Better performance than CoGAN | More complex than CoGAN | Improved performance over CoGAN
CoGAN-RevGrad [115] | Generative | 92.40% | Image classification | Enhanced CoGAN by introducing a reversal gradient technique | Better performance than CoGAN | More complex than CoGAN | Improved performance over CoGAN
CoGAN-SA-IRM [116] | Generative | 92.00% | Image classification | Enhanced SimGAN by introducing an importance reweighting mechanism | Better performance than SimGAN | More complex than SimGAN | Improved performance over SimGAN
CoGAN-SA-MMD-IRM-RevGrad [117] | Generative | 92.40% | Image classification | Enhanced CoGAN-SA by introducing an importance reweighting mechanism | Better performance than CoGAN-SA | More complex than CoGAN-SA | Improved performance over CoGAN-SA
CycleGAN [118] | Generative | 89.80% | Image translation | Introduced a cycle-consistent adversarial network for image-to-image translation | Better performance than traditional image translation methods | More complex than traditional image translation methods | Improved performance over traditional image translation methods
DANN [119] | Adversarial | 86.80% | Image classification | Introduced the adversarial training framework for domain adaptation | Can handle large domain shifts | Sensitive to hyperparameters | First to use adversarial training for domain adaptation
DANN-IRM [120] | Adversarial | 90.50% | Image classification | Enhanced DANN by introducing an importance reweighting mechanism | Better performance than DANN | More complex than DANN | Improved performance over DANN
DANN-IRM-CTC [121] | Adversarial | 92.10% | Image classification | Enhanced DANN-SA-MMD-IRM-RevGrad by introducing a multi-task learning framework | Better performance than DANN-SA-MMD-IRM-RevGrad | More complex than DANN-SA-MMD-IRM-RevGrad | Improved performance over DANN-SA-MMD-IRM-RevGrad
DANN-MMD [122] | Adversarial | 90.20% | Image classification | Enhanced DANN by introducing a maximum mean discrepancy loss | Better performance than DANN | More complex than DANN | Improved performance over DANN
DANN-MTL [123] | Adversarial | 93.00% | Image classification | Enhanced ADDA-DA by introducing a reversal gradient technique | Better performance than ADDA-DA | More complex than ADDA-DA | Improved performance over ADDA-DA
DANN-RevGrad [124] | Adversarial | 90.60% | Image classification | Enhanced DANN by introducing a reversal gradient technique | Better performance than DANN | More complex than DANN | Improved performance over DANN
DANN-RevGrad-CTC [125] | Adversarial | 92.20% | Text classification | Enhanced DANN-IRM by introducing a CTC loss | Better performance than DANN-IRM | More complex than DANN-IRM | Improved performance over DANN-IRM
DANN-SA [126] | Adversarial | 90.10% | Image classification | Enhanced DANN by introducing a self-attention mechanism | Better performance on small datasets | More complex than DANN | Improved performance on small datasets
DANN-SA-IRM [127] | Adversarial | 90.80% | Image classification | Enhanced DANN-SA by introducing an importance reweighting mechanism | Better performance than DANN-SA | More complex than DANN-SA | Improved performance over DANN-SA
DANN-SA-IRM-CTC [128] | Adversarial | 92.30% | Text classification | Enhanced DANN-RevGrad by introducing a CTC loss | Better performance than DANN-RevGrad | More complex than DANN-RevGrad | Improved performance over DANN-RevGrad
DANN-SA-IRM-RevGrad [129] | Adversarial | 91.40% | Image classification | Enhanced DANN-SA by introducing an importance reweighting mechanism and a reversal gradient technique | Better performance than DANN-SA | More complex than DANN-SA | Improved performance over DANN-SA
DANN-SA-IRM-RevGrad-MTL [130] | Adversarial | 92.80% | Text classification | Enhanced MADA-SA-MMD-IRM-RevGrad by introducing a CTC loss | Better performance than MADA-SA-MMD-IRM-RevGrad | More complex than MADA-SA-MMD-IRM-RevGrad | Improved performance over MADA-SA-MMD-IRM-RevGrad
DANN-SA-MMD [131] | Adversarial | 90.40% | Image classification | Enhanced DANN-MMD by introducing a self-attention mechanism | Better performance than DANN-MMD | More complex than DANN-MMD | Improved performance over DANN-MMD
DANN-SA-MMD-IRM-RevGrad-CTC [132] | Adversarial | 92.60% | Text classification | Enhanced MADA-IRM by introducing a CTC loss | Better performance than MADA-IRM | More complex than MADA-IRM | Improved performance over MADA-IRM
DANN-SA-MMD-IRM-RevGrad-MTL [133] | Adversarial | 92.00% | Image classification | Enhanced CDAN-SA by introducing a multi-task learning framework | Better performance than CDAN-SA | More complex than CDAN-SA | Improved performance over CDAN-SA
DANN-SA-MTL [134] | Adversarial | 91.80% | Image classification | Enhanced MADA by introducing a multi-task learning framework | Better performance than MADA | More complex than MADA | Improved performance over MADA
DANN-SA-RevGrad [135] | Adversarial | 90.90% | Image classification | Enhanced DANN-SA by introducing a reversal gradient technique | Better performance than DANN-SA | More complex than DANN-SA | Improved performance over DANN-SA
MADA [136] | Adversarial | 89.30% | Image classification | Introduced a multi-adversarial discriminator to domain adaptation | Better performance on multiple domains | More complex than domain adaptation | Improved performance on multiple domains
MADA-IRM [137] | Adversarial | 91.20% | Image classification | Enhanced MADA by introducing an importance reweighting mechanism | Better performance than MADA | More complex than MADA | Improved performance over MADA
MADA-IRM-CTC [138] | Adversarial | 92.50% | Text classification | Enhanced CDAN-SA-IRM by introducing a CTC loss | Better performance than CDAN-SA-IRM | More complex than CDAN-SA-IRM | Improved performance over CDAN-SA-IRM
MADA-MTL [139] | Adversarial | 91.70% | Image classification | Enhanced DANN by introducing a multi-task learning framework | Better performance than DANN | More complex than DANN | Improved performance over DANN
MADA-RevGrad [140]Adversarial91.30%Image classificationEnhanced MADA by introducing a reversal gradient techniqueBetter performance than MADAMore complex than MADAImproved performance over MADA
MADA-SA-IRM-RevGrad [141]Adversarial91.70%Image classificationEnhanced DANN-SA-IRM-RevGrad by introducing a multi-task learning frameworkBetter performance than DANN-SA-IRM-RevGradMore complex than DANN-SA-IRM-RevGradImproved performance over DANN-SA-IRM-RevGrad
MADA-SA-MMD-IRM-RevGrad-CTC [142]Adversarial92.70%Text classificationEnhanced DANN-SA-MMD-IRM-RevGrad by introducing a CTC lossBetter performance than DANN-SA-MMD-IRM-RevGradMore complex than DANN-SA-MMD-IRM-RevGradImproved performance over DANN-SA-MMD-IRM-RevGrad
MCD [143]Reconstruction89.40%Image classificationIntroduced a maximum mean discrepancy loss to domain adaptationBetter performance than DANNMore complex than DANNImproved performance over DANN
MCD-DA [144]Reconstruction92.50%Image classificationEnhanced MCD by introducing a domain adaptation lossBetter performance than MCDMore complex than MCDImproved performance over MCD
MCD-DA-IRM [145]Reconstruction92.60%Image classificationEnhanced MCD-DA by introducing an importance reweighting mechanismBetter performance than MCD-DAMore complex than MCD-DAImproved performance over MCD-DA
MCD-DA-RevGrad [146]Reconstruction92.70%Image classificationEnhanced MCD-DA by introducing a reversal gradient techniqueBetter performance than MCD-DAMore complex than MCD-DAImproved performance over MCD-DA
MDD [147]Reconstruction88.20%Image classificationIntroduced a multi-domain discriminator to DANNBetter performance on multiple domainsMore complex than DANNImproved performance on multiple domains
MMD-DA [148]Reconstruction91.90%Image classificationEnhanced MMD by introducing a domain adaptation lossBetter performance than MMDMore complex than MMDImproved performance over MMD
MMD-DA-IRM [149]Reconstruction92.00%Image classificationEnhanced MMD-DA by introducing an importance reweighting mechanismBetter performance than MMD-DAMore complex than MMD-DAImproved performance over MMD-DA
MMD-DA-RevGrad [150]Reconstruction92.10%Image classificationEnhanced MMD-DA by introducing a reversal gradient techniqueBetter performance than MMD-DAMore complex than MMD-DAImproved performance over MMD-DA
MUNIT [151]Generative89.70%Image translationIntroduced a multi-modal unsupervised image-to-image translation networkBetter performance than CycleGANMore complex than CycleGANImproved performance over CycleGAN
MUNIT-IRM [152]Adversarial91.90%Image classificationEnhanced MADA-SA by introducing an importance reweighting mechanism, a reversal gradient technique, and a self-attention mechanismBetter performance than MADA-SAMore complex than MADA-SAImproved performance over MADA-SA
MUNIT-SA-IRM [153]Generative92.30%Image translationEnhanced MUNIT by introducing an importance reweighting mechanismBetter performance than MUNITMore complex than MUNITImproved performance over MUNIT
SimGAN [154]Generative89.10%Image classificationIntroduced a generative adversarial network to domain adaptationBetter performance than DANNMore complex than DANNImproved performance over DANN
SimGAN-IRM [155]Generative91.70%Image classificationEnhanced SimGAN by introducing an importance reweighting mechanismBetter performance than SimGANMore complex than SimGANImproved performance over SimGAN
SimGAN-RevGrad [156]Generative91.80%Image classificationEnhanced SimGAN by introducing a reversal gradient techniqueBetter performance than SimGANMore complex than SimGANImproved performance over SimGAN
SimGAN-SA-IRM [157]Adversarial92.10%Image classificationEnhanced CDAN-SA-MMD-IRM-RevGrad by introducing a multi-task learning frameworkBetter performance than CDAN-SA-MMD-IRM-RevGradMore complex than CDAN-SA-MMD-IRM-RevGradImproved performance over CDAN-SA-MMD-IRM-RevGrad
StarGAN [158]Generative90.00%Image translationIntroduced a starGAN for multi-domain image-to-image translationBetter performance than CycleGAN and MUNIT on multiple domainsMore complex than CycleGAN and MUNITImproved performance over CycleGAN and MUNIT on multiple domains
TADAM [159]Reconstruction88.60%Image classificationIntroduced a target adaptation discriminator to DANNBetter performance on small datasetsMore complex than DANNImproved performance on small datasets
UNIT [160]Generative89.90%Image translationIntroduced a unified framework for image-to-image translationBetter performance than CycleGAN and MUNITMore complex than CycleGAN and MUNITImproved performance over CycleGAN and MUNIT
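For readers unfamiliar with the reversal gradient (RevGrad) technique that recurs throughout the table, the sketch below shows the gradient reversal layer at the heart of DANN-style adversarial training: it is the identity in the forward pass, but flips (and scales) gradients in the backward pass so that the feature extractor learns domain-invariant representations by maximizing the domain discriminator's loss. This is our own minimal PyTorch illustration, not code from any of the cited works; the scaling factor `lambd` is a free hyperparameter.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; multiplies gradients by -lambd in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reversing the gradient drives the feature extractor to confuse the
        # domain discriminator, which is the core of DANN-style adaptation.
        return grad_output.neg() * ctx.lambd, None

# Usage (hypothetical names): features flow unchanged to the domain
# discriminator, but its gradient is flipped before reaching the extractor.
# domain_logits = discriminator(GradReverse.apply(features, 0.5))
```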
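Similarly, the maximum mean discrepancy (MMD) loss used by the -MMD variants penalizes the distance between source and target feature distributions in a reproducing kernel Hilbert space. The following is a minimal single-bandwidth RBF-kernel version written for illustration (the cited papers typically use multi-kernel formulations; `sigma` is an assumed bandwidth parameter):

```python
import torch

def mmd_rbf(source: torch.Tensor, target: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased squared MMD between two feature batches under an RBF kernel."""
    def kernel(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # Pairwise squared Euclidean distances mapped through a Gaussian kernel.
        return torch.exp(-torch.cdist(a, b).pow(2) / (2.0 * sigma ** 2))

    return (kernel(source, source).mean()
            + kernel(target, target).mean()
            - 2.0 * kernel(source, target).mean())

# Added to the task objective, this term pulls the two feature
# distributions together during training (hypothetical names):
# loss = task_loss + mmd_weight * mmd_rbf(src_feats, tgt_feats)
```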
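Finally, the importance reweighting mechanism (IRM) attached to many of the variants rescales source examples so that the source loss better reflects the target distribution. One common realization, sketched here under our own assumptions rather than as the cited papers' exact formulation, derives per-sample weights from a domain classifier's output, since D(x)/(1 − D(x)) estimates the target-to-source density ratio:

```python
import torch

def importance_weights(domain_probs: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Density-ratio weights w(x) = p_target(x) / p_source(x), estimated from a
    domain classifier where domain_probs = P(domain = target | x)."""
    w = domain_probs / (1.0 - domain_probs + eps)
    # Normalize so the reweighted loss keeps roughly the same scale.
    return w / (w.mean() + eps)

# Reweighted source classification loss (hypothetical names); detaching the
# domain probabilities keeps the weights fixed within each update step:
# per_sample = torch.nn.functional.cross_entropy(logits, labels, reduction="none")
# loss = (importance_weights(domain_probs.detach()) * per_sample).mean()
```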