Review

Human and Multi-Robot Collaboration in Indoor Environments: A Review of Methods and Application Potential for Indoor Construction Sites

1 Department of Big Data, Chungbuk National University, Cheongju 28644, Republic of Korea
2 College of Engineering for Built Environment Research Center, Sungkyunkwan University, Suwon 16419, Republic of Korea
3 Division of Architecture and Urban Design, Incheon National University, Incheon 22012, Republic of Korea
4 School of Civil and Environmental Engineering, Kookmin University, Seoul 02707, Republic of Korea
5 Department of Architecture Engineering, Kwangwoon University, Seoul 01897, Republic of Korea
6 Department of Architectural Engineering, Chungbuk National University, Cheongju 28644, Republic of Korea
* Author to whom correspondence should be addressed.
Buildings 2025, 15(15), 2794; https://doi.org/10.3390/buildings15152794
Submission received: 2 July 2025 / Revised: 2 August 2025 / Accepted: 5 August 2025 / Published: 7 August 2025
(This article belongs to the Special Issue Automation and Robotics in Building Design and Construction)

Abstract

The integration of robotic agents into complex indoor construction environments is increasing, particularly through human–robot collaboration (HRC) and multi-robot collaboration (MRC). These collaborative frameworks hold great potential to enhance productivity and safety. However, indoor construction environments present unique challenges, such as dynamic layouts, constrained spaces, and variable lighting conditions, which complicate the safe and effective deployment of collaborative robot teams. Existing studies have primarily addressed various HRC and MRC challenges in manufacturing, logistics, and outdoor construction, with limited attention given to indoor construction settings. To this end, this review presents a comprehensive analysis of human–robot and multi-robot collaboration methods within various indoor domains and critically evaluates the potential of adopting these methods for indoor construction. This review presents three key contributions: (1) it provides a structured evaluation of current human–robot interaction techniques and safety-enhancing methods; (2) it presents a summary of state-of-the-art multi-robot collaboration frameworks, including task allocation, mapping, and coordination; and (3) it identifies major limitations in current systems and provides research directions for enabling scalable, robust, and context-aware collaboration in indoor construction. By bridging the gap between current robotic collaboration methods and the needs of indoor construction, this review lays the foundation for the development of adaptive and optimized collaborative robot deployment frameworks for indoor built environments.

1. Introduction

Robotic agents are being deployed across various industrial domains, including manufacturing and construction [1]. This is due to the ability of these systems to increase task efficiency, reduce labor dependence, and improve overall productivity [2]. In the construction industry, robots have been applied to a wide range of tasks, including outdoor inspections, progress monitoring, waste sorting, and hazard detection [3,4]. These applications have yielded promising results and demonstrated the significant potential of robots to enhance performance, productivity, and safety on construction sites.
Despite the current feasibility of deploying robot agents for various construction tasks, the increasing complexity of certain construction activities requires a shift from individual robot deployment to collaborative implementation [5]. This collaborative deployment is particularly important in specialized and dynamic environments such as indoor construction sites, where layouts, tasks, and activities are more complex [6]. Consequently, the integration of collaborative team approaches, such as human–robot collaboration (HRC) and multi-robot collaboration (MRC), is becoming essential for addressing the unique challenges of indoor construction settings [7].
HRC involves the coordinated execution of tasks between humans and robots, while MRC refers to the collaborative execution of tasks among multiple robotic agents [8,9]. Both HRC and MRC have demonstrated several benefits in structured indoor environments, such as smart manufacturing and modular construction, particularly in terms of productivity, task efficiency, and safety. To further enhance these benefits, several HRC and MRC studies have been conducted with a focus on improving key aspects of these collaborative frameworks. Research on HRC has prioritized improvements in human–robot communication, real-time teleoperation, and accident prevention [10,11,12,13]. Concurrently, MRC research has advanced through techniques such as multi-robot task allocation (MRTA), cooperative localization, and collaborative path planning by integrating sensors, such as LiDAR, cameras, and RGB-D devices, with various AI algorithms [14,15,16].
Despite the increasing application of HRC and MRC methods in general outdoor construction and structured indoor environments, the practical application to indoor construction sites remains limited, with few studies conducted. This inadequate deployment is primarily due to the unique operational characteristics of indoor construction, which differ from other indoor domains. Unlike controlled environments, indoor construction sites are extremely dynamic, with layouts changing frequently, cluttered and confined workspaces, inconsistent lighting, varying task setups, and unpredictable worker movements [17]. These factors introduce serious challenges to robot perception, path planning, interaction, and safety, making it difficult to directly transfer existing HRC and MRC approaches into these settings [18]. Consequently, there is a critical need to investigate whether current collaborative robotic frameworks, which have mainly been developed for structured and ideal conditions, can technically be deployed in indoor construction contexts. This review, therefore, seeks to summarize the current state of the art regarding HRC and MRC methods in structured indoor environments and to evaluate the extent to which current methods can be adapted to the unique demands of indoor construction environments. This is important because it enables the identification of current technical limitations, reveals cross-domain transferable methods, and informs the development of more robust and effective domain-specific methods for robot collaborations in complex indoor construction environments. While previous reviews have focused on robot deployment in domains such as outdoor inspections, structural prefabrication, and concrete construction [19,20,21,22], no review has been conducted to systematically examine HRC and MRC methods with a specific emphasis on indoor construction environments.
To this end, this review presents a comprehensive analysis of current HRC and MRC methods developed for indoor environments, with a focus on their potential cross-domain applications to indoor construction. This review seeks to achieve the following objectives:
  • Summarize current methods of human–robot interaction, teleoperation, and accident prevention and identify their potential for application in dynamic indoor construction environments.
  • Identify key techniques for multi-robot task allocation, path planning, localization, and navigation in cluttered and complex indoor environments.
  • Evaluate the limitations of current HRC and MRC methods and identify directions for future research to improve practical deployment in complex indoor construction sites.

2. Research Methodology

This review aims to evaluate current approaches to human–robot and multi-robot collaboration techniques in indoor environments, with emphasis on their applicability in indoor construction. To achieve this, a systematic literature review (SLR) was conducted, followed by a mixed-methods analysis of the selected studies. Papers meeting pre-defined inclusion criteria were analyzed using both quantitative (bibliometric) and qualitative approaches. As noted by Linnenluecke et al. [23], bibliometric analysis employs descriptive statistics to offer a general overview of current literature in a specific domain, whereas qualitative analysis presents a deeper understanding of key themes, content, and context of the selected studies [24]. This review utilized a mixed-methods approach since it combines the advantages of both techniques, providing a more comprehensive and detailed evaluation of current research trends in HRC and MRC [25].

Article Search and Inclusion Criteria

A systematic search was utilized to identify and retrieve relevant articles on HRC and MRC in indoor environments. Considering the wide distribution of related papers across multiple journals and conferences, two categories of keywords were utilized for literature search, as shown in Table 1. The search was conducted in two major academic databases: (1) Scopus and (2) Web of Science (WoS). These databases were selected due to their comprehensive coverage of peer-reviewed articles and conference papers, as well as their widespread adoption for literature search in systematic reviews within engineering, science, and robotic domains [26,27].
Figure 1 shows the systematic process for the literature search and article inclusion. The article selection process had four main phases: (1) identification, (2) screening, (3) eligibility, and (4) inclusion. In the identification phase, a total of 565 papers were retrieved from Scopus and WoS using the specified keyword combinations. In the second phase, the screening step, filters were applied to retain only journal and conference papers written in English. Additionally, the search was limited to papers published between 2017 and 2025 to ensure that only recent papers were included. As a result, 32 conference review papers, 8 book chapters, 2 review articles, and 13 non-English papers were excluded. Subsequently, a total of 118 duplicate records were also identified and eliminated using the Zotero reference manager (version 7). This resulted in 392 unique journal and conference articles eligible for further assessment. In the third phase, the eligibility screening step, a two-stage review process was conducted. Paper titles and abstracts were first screened to confirm the relevance of papers to either HRC or MRC in indoor or complex settings. Then, full-text evaluations were conducted independently by two researchers to identify articles meeting predefined inclusion criteria. Articles were only included if they met the following criteria: (1) studies involving either HRC or MRC, (2) studies that utilized mobile robots, (3) experimental studies in either real or simulated environments, and (4) studies focused on indoor, enclosed, or cluttered environments, such as factories, manufacturing, and indoor service environments. This review considered studies that utilized mobile robotic systems because mobile robots offer more autonomy and adaptability to operate across complex locations [28].
This makes them more applicable for complex HRC and MRC tasks in indoor construction environments compared to fixed robotic arm manipulators, which are mostly stationary, as well as unmanned aerial vehicles (UAVs), which may face unique challenges and spatial constraints in indoor environments [29,30]. Based on the title and abstract evaluation, a total of 265 papers were excluded, with 127 papers undergoing full-text review. This process resulted in a total of 40 studies selected for inclusion.
Recognizing the limited number of studies specifically targeting mobile robot-based MRC and HRC in indoor complex environments, a manual snowballing technique was also employed to identify additional articles to supplement the search. Additional relevant articles were identified based on the references cited in the initially selected papers. This process led to the addition of 36 more studies, including papers presenting generalizable HRC safety frameworks, MRC coordination strategies, and navigation or task allocation methods with a strong potential for indoor deployment. In total, 76 studies were included in this review.

3. Quantitative Results

Figure 2 shows the details of selected studies, categorized by (a) geographical region, (b) year of publication, and (c) type of publication. Three key insights can be noted from this figure.
First, the geographical distribution (Figure 2a) shows that the affiliations of contributing authors span six major regional categories. The majority of studies (30) originated from China, with a substantial number of publications also coming from the United States (13) and the European region (11). This suggests sustained engagement in HRC and MRC research worldwide. In the European region, publications were noted from countries including France, Croatia, Romania, Poland, Italy, and Portugal. Additionally, South Korea and Japan contributed a combined total of eight papers, with seven papers also originating from India. A few studies were also identified from countries including Sri Lanka, the United Arab Emirates, and Iran, further suggesting that interest in collaborative robotics is geographically widespread. Overall, this global distribution, which highlights publications from almost every continent, reveals a growing interest in collaborative robotics and reflects increasing research attention in this domain.
Second, the year-by-year publication trend shown in Figure 2b illustrates how research in this domain is evolving with time. While the review captured a few publications before 2018, there was a noticeable surge in research output beginning in 2021, indicating growing interest in robot collaboration. The trend remains generally upward, despite a notable dip in 2023. In contrast to 2023, the high number of publications in 2024 and the substantial output within the first five months of 2025, almost equaling that of 2024, suggest that research activity in HRC and MRC continues to increase. This indicates that the topic remains relevant, with more contributions and publications expected.
Third, the distribution by publication type, as shown in Figure 2c, indicates that 58 out of the 76 selected papers were published in peer-reviewed journals, while the remaining were presented at various conferences. The large number of journal publications highlights the mature and scientifically rigorous nature of research in this domain. Current HRC and MRC studies have been published in reputable journals such as Engineering Applications of Artificial Intelligence, IEEE Transactions on Automation Science and Engineering, Sensors, Automation in Construction, and IEEE Robotics and Automation Letters, among others. Conference publications, though fewer in number, reflect the emerging nature of current HRC and MRC methods and evolving ideas to enhance these collaborative frameworks.

4. Qualitative Discussions

Figure 3 shows an overview of key dimensions and methods of HRC and MRC in indoor environments. A comprehensive full-text analysis revealed several critical research dimensions that form the basis for evaluating current HRC and MRC approaches. For HRC, the key dimensions and themes identified include: (1) human–robot interaction, communication, and teleoperation methods; and (2) human–robot accident prevention methods. For MRC, the papers were categorized into (1) multi-robot task allocation and path planning and (2) multi-robot navigation and localization. The subsequent sections present an in-depth discussion of each of these dimensions.

4.1. Human–Robot Collaboration (HRC) for Indoor Environments

The integration of human–robot teams into indoor environments presents an opportunity to enhance the productivity and safety of both workers and robots. However, the complex nature of these environments requires that HRC methods be enhanced with techniques that improve human–robot communication and ensure safety. Several studies have been conducted, and numerous techniques for human–robot interaction and accident prevention have been presented. This section reviews the key methods and techniques proposed in the literature, with particular emphasis on their applicability and potential for deployment in indoor construction environments.

4.1.1. Human–Robot Interaction, Communication, and Teleoperation Methods in Indoor and Complex Environments

Effective interaction and teleoperation between humans and robots are crucial for successful task execution in HRC [31]. The review of the literature revealed several human–robot interaction methods developed across various domains, including manufacturing, indoor service robots, modular construction, as well as generalized outdoor construction [11,12,13].
Table 2 summarizes the studies that have presented various HRI methods and highlights their applicability to indoor construction. Based on the review of the literature, three key human–robot interaction methods were identified: (1) vision-based gesture control methods, (2) wearable sensor-based gesture control methods, and (3) brain–computer interface (BCI) methods, as illustrated in Figure 4.
Vision-based interaction methods utilize onboard cameras and gesture recognition algorithms to interpret human gestures and translate them into robot commands [32]. These methods, including those proposed in [33,34,35], rely heavily on deep learning algorithms, such as CNNs, YOLOv3, and attention modules, to recognize hand gestures. These systems perform well under ideal conditions with clear lighting and an unobstructed line of sight, with studies reporting accuracies exceeding 90%. Despite the high accuracies achieved, the effectiveness of vision-based interaction methods may be limited in the presence of occlusion, poor lighting, or clutter, all of which are common in indoor construction environments. Moreover, the need for continuous visual contact between robots and workers limits the use of such methods in narrow and fast-changing indoor construction settings.
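The command-mapping step at the end of such pipelines can be illustrated with a deliberately simplified sketch. It assumes 2D hand keypoints have already been extracted by an upstream pose estimator (the part the cited deep models perform); the keypoint layout, distance thresholds, and command names are hypothetical and not taken from any of the cited systems.

```python
# Illustrative sketch only: map hand keypoints (assumed to come from an
# upstream pose estimator) to discrete robot commands via a geometric rule.
# Thresholds and command names are hypothetical assumptions.

def classify_gesture(wrist, fingertips):
    """An open hand (all fingertips far from the wrist) means 'STOP';
    a closed fist (all fingertips near the wrist) means 'GO'."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    spans = [dist(wrist, tip) for tip in fingertips]
    if all(s > 0.15 for s in spans):   # normalized image coordinates
        return "STOP"
    if all(s < 0.08 for s in spans):
        return "GO"
    return "UNKNOWN"
```

For instance, a wrist at (0.5, 0.9) with fingertips spread toward the top of the frame yields "STOP", while fingertips clustered at the wrist yield "GO"; real systems replace this rule with a learned classifier.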
Similar to vision-based interaction methods, wearable sensor-based gesture control systems also provide commands to robots through gestures. However, these methods, instead of using robot-embedded cameras, rely on inertial measurement units (IMUs) embedded in wearable devices to capture hand gesture patterns. These patterns are then converted into robotic commands [13,36,37]. These methods, unlike vision-based methods, do not require direct visual gesture inputs and may be more effective in occluded, dim, or cluttered environments. Some of the studies utilizing these methods have also demonstrated high accuracies exceeding 90% [13,38], showing the potential of achieving effective robot control and teleoperation. However, these techniques may also face a unique set of limitations that may affect their effective and practical deployment in indoor construction sites. First, these techniques are invasive and require workers to continuously wear gesture capture devices. This may be challenging and impractical during manual task execution. Second, the devices may also be susceptible to false detection and incorrect gesture commands in dynamic construction tasks, where complex hand movements may trigger unrelated or unintended robot commands.
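The gesture-spotting step of such wearable systems can be sketched, under strong simplifying assumptions, as peak detection on the IMU acceleration magnitude stream. The "double-tap" gesture, thresholds, and sample gaps below are illustrative; the cited studies instead learn gesture classes with neural networks or clustering.

```python
# Illustrative sketch only: spot a "double-tap" command gesture by counting
# acceleration spikes in an IMU magnitude stream. Thresholds and the gesture
# vocabulary are hypothetical assumptions, not from the cited systems.

def detect_double_tap(accel_mag, thresh=2.5, min_gap=3, max_gap=20):
    """accel_mag: acceleration magnitudes (in g) sampled at a fixed rate.
    Returns True if two above-threshold local peaks occur between
    min_gap and max_gap samples apart."""
    peaks = [i for i in range(1, len(accel_mag) - 1)
             if accel_mag[i] > thresh
             and accel_mag[i] >= accel_mag[i - 1]
             and accel_mag[i] >= accel_mag[i + 1]]
    return any(min_gap <= b - a <= max_gap
               for a, b in zip(peaks, peaks[1:]))
```

The sketch also hints at the false-trigger problem noted above: any pair of incidental hand jolts spaced like a double tap would fire the command, which is why learned models and sensor fusion are preferred in practice.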
In contrast to vision and wearable sensor gesture-based systems that rely on gesture inputs, BCI-based methods leverage electroencephalography (EEG) signals and motor imagery (MI) to control robots [11]. In this interaction method, workers mentally simulate actions, which are processed and translated into robotic commands using machine learning and signal processing pipelines [11,12,39,40]. Current methods leverage algorithms such as SVM, LSTM, and random forest, with high accuracies above 80% achieved [11,12,39]. Despite the promising hands-free approach BCI-based methods present for human–robot teleoperation, these methods also face significant practical limitations. First, they require continuous use of EEG devices, which are invasive and may affect the job performance of construction workers. Additionally, the use of motor imagery may impose a high cognitive workload and stress during fast-paced tasks and continuous robot control, thereby compromising the cognitive safety and alertness of workers [41]. Second, current EEG devices are highly susceptible to motion artifacts and may face a high level of signal distortion if deployed in indoor construction environments.
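One core stage of such MI pipelines, estimating band power in the mu rhythm (roughly 8-12 Hz) and turning it into a binary command, can be sketched as follows. The sampling rate, band limits, threshold, and the simple thresholding rule are illustrative assumptions; the cited systems learn the decision boundary with SVM, LSTM, or similar classifiers on multi-channel data.

```python
import math

# Illustrative sketch only: single-channel EEG mu-band (8-12 Hz) power via a
# direct DFT, thresholded into a binary robot command. Parameters are
# hypothetical assumptions; real systems learn the rule from training data.

def band_power(signal, fs, f_lo, f_hi):
    """Average squared DFT magnitude over frequency bins in [f_lo, f_hi] Hz."""
    n = len(signal)
    power, bins = 0.0, 0
    for k in range(1, n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
            bins += 1
    return power / bins if bins else 0.0

def mi_command(signal, fs=128, thresh=1.0):
    # Mu-rhythm desynchronization (reduced band power) during imagined
    # movement is taken as the "move" command.
    return "MOVE" if band_power(signal, fs, 8, 12) < thresh else "IDLE"
```

A strong 10 Hz oscillation in the window yields "IDLE" (mu rhythm present), while a desynchronized, low-power window yields "MOVE"; the motion artifacts mentioned above would corrupt exactly this band-power estimate.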
Although the three human–robot interaction methods discussed have achieved promising results, specifically in controlled environments and under ideal conditions, the deployment of these techniques in indoor construction sites in their current form may face several practical limitations. More research is needed to improve these methods.
Table 2. Summary of human–robot interaction methods in other industries and applicability in indoor construction.
| Interaction Type | Reference | Industry | Methods/Algorithms | Key Hardware | Findings and Accuracy | Enhancements |
| --- | --- | --- | --- | --- | --- | --- |
| Vision-based control | Bamani et al. [33] | General | URGR framework, HQ-Net, Graph ViT | Depth and infrared camera embedded on Unitree GO1 robot (Hangzhou Yushu, China) | 98.1% (HQ-Net + GViT) | Add gesture recognition |
| | Vanamala et al. [42] | General | Convex hull, CNN | Webcam, microphone, Arduino UNO (Italy) | Effective basic command recognition | Add depth sensors to improve noise filtering |
| | Sahoo et al. [34] | General | Dense CNN, channel attention module | Kinect V2 depth camera (Microsoft) | 93.4% mean accuracy | Improve occlusion robustness |
| | Xie et al. [43] | General | MediaPipe 3D keypoint, GRU pose model | Intel RealSense D435i (USA), Unitree Go1 robot (Hangzhou Yushu, China) | 100% classification | Expand gestures, integrate multi-view |
| | Budzan et al. [44] | General | CNN ResNet50, MobileNetV2 | 2D camera (Basler, Germany), CamBoard Pico ToF camera (PMD, Germany) | RGB: 89.79%, grayscale: 78.95% | Expand the gesture dictionary |
| | Wang & Zhu [35] | Construction | YOLOv3, Deep SORT, ResNet10 | ZED 2 stereo camera (Stereolabs, USA) | 91.1% accuracy, 0.14 s response time | Add dynamic gestures |
| Gesture-based control | Wang et al. [45] | General | Bit-posture mapping | 6-DOF haptic controller | Positional tracking < 0.015 m | Adaptive filtering |
| | Aggravi et al. [36] | General | Decentralized connectivity, multi-robot exploration | Vibrotactile armbands, audio headphones | 100% haptic recognition, 86% audio feedback accuracy | Integrate gesture-based directional inputs |
| | Stancic et al. [38] | General | K-means for online motion classification | Arduino Mega 2560 (Arduino SRL, Italy), custom inertial sensors | F1-scores of 97.3% for RF, 95.1% for ANN | Integrate auto-calibration |
| | Wang et al. [13] | Construction | Fully connected networks | Tap Strap 2 IMU wearable sensor (Tap, USA) | 85.7% precision, 93.8% recall | Integrate sensor fusion |
| | Wang et al. [46] | Construction | Two-stream network (I3D + ResNet) | Tap Strap 2 IMU (Tap, USA), Tobii Pro Glasses | 98.8% validation, 92.6% test accuracy | Reduce latency via edge inference |
| | Yang et al. [47] | General | Fuzzy logic control, human intent estimation | Dual 7-DOF manipulator arms, Maxon DC motors | High trajectory tracking, low RMSE | Add vision tracking and conduct an onsite test |
| BCI/EEG-based control | Ghinoui et al. [39] | General | ASTGCN, EEGNetv4, CNN-LSTM | OpenBCI EEG cap (19 electrodes, USA), Raspberry Pi | CNN-LSTM: 88.5%, EEGNetv4: 83.9% | Optimize using a few EEG electrodes |
| | Liu et al. [11] | Construction | EEG classification with MLP, adaptive filtering | 32-channel Emotiv Flex EEG headset (USA), ROS middleware | 81.91% accuracy, 3 s latency | Reduce latency with edge computing |
| | Liu et al. [12] | Construction | EEG motor imagery, SVM | Emotiv Flex EEG headset (USA), online signal filtering | 90% accuracy in real-time MI control | Add gesture/voice fallback |
| | Yuan et al. [48] | General | SVM, deep residual net | NuAmps EEG headset (Compumedics Neuroscan, USA), Kinect sensor (Microsoft) | 67.2–89.3% EEG accuracy | Add automated data filtering |
Ultra-Range Gesture Recognition (URGR), High Quality Network (HQ-Net), Vision Transformer (ViT), Convolutional Neural Network (CNN), Attention-Based Spatial-Temporal Relational Graph Convolutional Network (ASRGCN), Electroencephalograph (EEG), Multi-Layer Perceptron (MLP), Support Vector Machine (SVM).

4.1.2. Human–Robot Collision and Accident Prevention Methods in Indoor and Complex Environments

Effective accident prevention is fundamental for safe and efficient HRC, particularly in dynamic and confined indoor construction environments [28]. As collaborative robots are increasingly being integrated into construction workflows, there is a need for reliable, context-aware strategies to enhance the safety of human workers and robots. Several studies have been conducted, and two main methods for human–robot accident prevention have been presented: (1) human intention and trajectory prediction, and (2) collision avoidance.
Table 3 summarizes the key studies on human–robot collision avoidance and human intention and trajectory estimation methods for human–robot safety. The human intention and trajectory estimation methods present a proactive approach to accident prevention. These methods embed robotic agents with the ability to predict human movements and paths, providing robots the capability to predict and avoid human trajectories [49]. These techniques are developed utilizing machine learning algorithms, such as LSTM networks, to model motion patterns and forecast future positions of workers based on historic and real-time data [49,50,51]. Such methods embed robots with a predictive layer of intelligence that allows them to anticipate human actions and adjust their trajectories and movement patterns accordingly. One advantage of these methods is their ability to operate without direct physical sensing of obstacles, which reduces computational demand [50]. The studies utilizing these methods achieved promising accuracies, which further highlight the capability of achieving real-time accident prevention. For example, the study by Liu & Jebelli [50] achieved collision probabilities of less than 5%, while Cai et al. [52] reduced human–robot contact by approximately 23%. Despite these promising results, there are some limitations that need to be addressed in future studies to enhance the potential of these techniques for practical deployment. Current methods are optimized for single-worker and single-robot systems and may be ineffective in fast-paced indoor construction environments characterized by multiple workers. Additionally, current methods have been evaluated mainly in simulated and ideal environments and lack real-time testing in dynamic environments.
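The predict-then-yield pattern behind these methods can be illustrated with a minimal sketch that substitutes a constant-velocity extrapolation for the learned LSTM predictors cited above. The prediction horizon and safety radius are hypothetical parameters chosen for illustration.

```python
# Illustrative sketch only: proactive safety via trajectory prediction.
# A constant-velocity model stands in for the learned (LSTM) predictors in
# the cited studies; horizon and safety_radius are hypothetical assumptions.

def predict_positions(track, horizon):
    """track: worker (x, y) positions at uniform time steps. Returns the
    next `horizon` positions, assuming the last observed velocity persists."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0
    return [(x1 + vx * k, y1 + vy * k) for k in range(1, horizon + 1)]

def must_yield(track, robot_waypoint, horizon=5, safety_radius=1.0):
    """True if the predicted worker path comes within safety_radius (m)
    of the robot's next waypoint, i.e., the robot should replan or stop."""
    rx, ry = robot_waypoint
    return any((px - rx) ** 2 + (py - ry) ** 2 <= safety_radius ** 2
               for px, py in predict_positions(track, horizon))
```

A worker walking along the x-axis toward the robot's waypoint triggers a yield, while a waypoint well off the predicted path does not; the single-worker assumption baked into this sketch mirrors the limitation of current methods noted above.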
In contrast to intention and trajectory estimation methods, several studies have presented more generalized collision prevention methods. These techniques directly enable robots to detect and avoid obstacles and humans in real time, thereby reducing the likelihood of physical contact [10]. These systems rely heavily on sensor devices, such as LiDAR, RGB-D cameras, UWB, and IMUs, to give robots comprehensive perception of the environment. Current studies have combined these sensor data with algorithms such as Kalman filtering, Bayesian optimization, and probabilistic reachability analysis to assess potential collision zones and generate safe paths [53,54,55,56]. Promising results have been achieved, with some studies reducing collision events by over 80% [10]. Some of the collision prevention studies have also implemented advanced multiscale local perception strategies to account for both nearby and distant obstacles during collaborative tasks [51]. These systems adaptively adjust robot behavior depending on obstacle proximity. While many of these approaches have been validated in simulations, a few have been tested in controlled real-world environments [10,53], demonstrating the feasibility of real-time human–robot cooperation using fused sensory inputs and predictive planning algorithms.
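As a minimal illustration of the filter-then-react pattern underlying several of these systems, the sketch below smooths noisy range readings to a static obstacle with a one-dimensional Kalman filter and maps the filtered distance to a speed command. The noise parameters, distance thresholds, and constant-position motion model are illustrative assumptions, not values from the cited studies.

```python
# Illustrative sketch only: 1D Kalman filtering of noisy range measurements
# to a static obstacle, plus a proximity-based speed rule. Noise parameters
# and thresholds are hypothetical assumptions.

def kalman_1d(measurements, q=0.01, r=0.25, x0=None, p0=1.0):
    """Constant-position model: q is process noise, r is measurement noise.
    Returns the filtered distance estimate after each measurement."""
    x = measurements[0] if x0 is None else x0
    p = p0
    estimates = []
    for z in measurements:
        p += q                  # predict (state assumed constant)
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update toward measurement z
        p *= (1 - k)
        estimates.append(x)
    return estimates

def speed_command(filtered_dist, stop_dist=0.5, slow_dist=1.5, v_max=1.0):
    """Full speed when clear, linear slow-down in the buffer zone, stop when
    the filtered obstacle distance (m) falls below stop_dist."""
    if filtered_dist <= stop_dist:
        return 0.0
    if filtered_dist <= slow_dist:
        return v_max * (filtered_dist - stop_dist) / (slow_dist - stop_dist)
    return v_max
```

Handling moving workers would require a velocity state in the filter and a richer motion model, which is precisely the gap between this sketch and the fused LiDAR/camera pipelines described above.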
Table 3. Summary of human–robot collision and accident prevention methods in various domains and applicability to indoor construction.
| Safety Method | Ref | Application | Robot Type | Environment | Key Sensor(s) | Key Algorithm(s)/Framework | Main Performance |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Worker intention and trajectory estimation | Cai et al. [57] | Construction | - | Simulated | - | LSTM, DQN | 100% success; 23% fewer collisions |
| | Liu & Jebelli [50] | Construction | - | Simulated | RGB camera | 3D MotNet, Intention Net | Reduced collision probability to <5% |
| | Cai et al. [52] | Construction | Mobile robot | Simulated | - | Uncertainty-aware LSTM | Average error of 9.30 pixels; final error of 11.21 pixels |
| Collision avoidance | Pramanik et al. [55] | Construction | TurtleBot3 Waffle | Simulated | RGB-D camera, LiDAR | Kalman filter, STL, probabilistic reachability | RMSE of 0.2–0.24 m |
| | Teodorescu et al. [56] | Hazardous environments | Mobile robot | Simulated and indoor | 3D LiDAR, depth camera | GPR, Bayesian optimization | 99.7% collision avoidance confidence |
| | Dang et al. [53] | Indoor service | Differential drive robot | Indoor | UWB sensor | - | 1.15 m average distance from the worker |
| | Ghandour et al. [54] | Indoor service | H20 mobile robot | Indoor | RGB, infrared depth sensor | Gesture recognition, CCAI | 100% effective collision avoidance performance |
| | Mulas-Tejeda et al. [58] | Industrial settings | TurtleBot3 Waffle Pi | Indoor | 2D LiDAR, OptiTrack MoCap | LSTM-based velocity prediction | 98.02% validation accuracy |
| | Li et al. [59] | Construction | Custom quadruped robot | Indoor and outdoor | 3D LiDAR, UWB, IMU | Incremental A* path planning, UWB localization | Obstacle avoidance with 0.1 m error |
| | Yu et al. [51] | Smart factories and logistics | Omnidirectional mobile robot | Simulated and indoor | 2D LiDAR, potentiometer | MLPRA obstacle avoidance | 100% simulation success; effective in a real environment |
| | Kim et al. [10] | Indoor logistics | 4WIS mobile robot | Indoor and outdoor | 3D LiDAR, camera, IMU, CAN bus | Kalman filter + Pillar Feature Net | 83% reduction in obstacle detection time |
| | Che et al. [60] | Indoor service | TurtleBots | Indoor | RGB-D, ArUco | EKF, social force model | 91% accuracy in human priority and 83% in robot |
Long Short-Term Memory (LSTM), Deep Q-Networks (DQN), Cooperative Collision Avoidance-Based Interaction (CCAI), Multiscale Local Perception Region Approach (MLPRA).
Despite promising results and the potential of these methods for effective human–robot safety, significant limitations remain. Collision avoidance methods rely heavily on ideal sensor readings and are computationally expensive, which may limit their use in complex construction environments. Additionally, most studies have addressed static obstacles, with limited attention given to dynamic ones. More research is needed to extend current methods to more dynamic and complex environments, such as indoor construction sites.

4.2. Multi-Robot Collaboration Methods for Indoor Environments

Multi-robot collaboration (MRC), which involves deploying multiple robotic agents that work as a team, also holds great potential for enhancing productivity and efficiency in industrial environments [61]. In specialized industrial settings such as indoor construction, where tasks are dynamic, space is constrained, and safety is critical, multi-robot collaboration holds promise for automating activities such as material transport, inspection, and manual task execution. Several studies have been conducted in domains such as manufacturing and general indoor service robotics, and various techniques for enhancing MRC have been presented. A review of MRC studies reveals that current research has focused on two main dimensions: (1) multi-robot task allocation and path planning and (2) collaborative navigation and localization. These dimensions are discussed in detail in the next subsections, and their applicability in indoor construction, as well as enhancement techniques for effective deployment, are presented.

4.2.1. Multi-Robot Task Allocation and Path Planning Methods for Indoor Environments

Multi-robot task allocation and path planning are important aspects of robot deployment in complex indoor environments such as manufacturing and indoor service settings [62]. Effective task allocation and path planning are also relevant for indoor construction sites, which often possess dynamic layouts, limited space, unpredictable obstacles, and varying human activities [63]. However, achieving effective multi-robot task allocation and path planning in these environments requires advanced techniques that can support decentralized control, online learning, and robust multi-robot coordination [62]. Current studies in various indoor domains have presented several methods and techniques for enhanced multi-robot task allocation and path planning in indoor environments. Although these methods have been developed for more structured indoor environments, such as manufacturing, they provide foundational techniques that can be modified and enhanced for more complex indoor environments, such as indoor construction.
Table 4 presents a summary of the multi-robot task allocation and path planning literature. Recent studies aiming to enhance multi-robot task allocation have mainly explored decentralized decision making and dynamic learning-based control, where a group of robots undertake task allocation based on local information and communication with nearby robots [64]. These systems have been implemented using reinforcement learning frameworks such as multi-agent deep deterministic policy gradient (MADDPG), deep Q-networks (DQN), and distributed actor-critic learning [14,65,66]. Some of the studies have also integrated mechanisms such as task priority computation, experience broadcasting, and sparse kernel modeling to enable multi-robot teams to allocate and execute tasks cooperatively while minimizing computational demand [65,66]. In addition to decentralized task allocation methods, some studies have also explored optimization-driven techniques through the integration of algorithms such as hybrid filtered beam search, genetic algorithms, and K-means clustering, as well as some distributed frameworks such as auction-based task allocation [67,68], to achieve effective, cost-efficient, and safe task distribution. The auction-based method presented in [68] also enhances the robustness of task allocation methods by ensuring continuous robot connectivity and information sharing, which are crucial in environments with limited bandwidth and high architectural complexity, such as indoor construction sites.
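To make the auction-based allocation idea concrete, the following is a minimal, illustrative sketch rather than any cited system's implementation: in each round, every robot bids its Euclidean travel cost for every open task, and the lowest bid wins. The robot names, task names, and coordinates are hypothetical.

```python
import math

def auction_allocate(robots, tasks):
    """Greedy sequential auction: repeatedly award the task-robot pair
    with the lowest bid (Euclidean travel cost from the robot's current
    position); the winning robot then moves to the awarded task."""
    positions = dict(robots)      # robot id -> current (x, y)
    assignment = {}               # task id -> winning robot id
    unassigned = dict(tasks)      # task id -> (x, y)
    while unassigned:
        # Every robot bids on every remaining task; take the cheapest bid.
        cost, robot, task = min(
            (math.dist(positions[r], loc), r, t)
            for t, loc in unassigned.items() for r in positions
        )
        assignment[task] = robot
        positions[robot] = unassigned.pop(task)  # robot relocates to the task
    return assignment

robots = [("r1", (0.0, 0.0)), ("r2", (10.0, 0.0))]
tasks = {"t1": (1.0, 0.0), "t2": (9.0, 0.0), "t3": (2.0, 0.0)}
print(auction_allocate(robots, tasks))
```

Real auction-based systems (such as the distributed scheme in [68]) run the bidding over a network with connectivity constraints; this sketch only captures the cost-driven bidding logic.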
In the context of multi-robot path planning, studies across domains have proposed methods based on algorithms ranging from basic deterministic planners to advanced hybrid and learning-based systems capable of operating in real time in obstacle-rich 3D indoor spaces. Some studies have introduced dual-layer frameworks that combine global path planning algorithms, such as A* and Glasius bio-inspired neural networks, with local strategies such as the dynamic window approach and ant colony optimization to achieve robust multi-robot path planning and fine-grained collision avoidance [69,70]. Other studies have presented formation control techniques that utilize neural fields, auto-switching potential fields, or game-theoretic collaboration [71,72], as well as congestion-aware techniques that leverage Monte Carlo simulation and probabilistic modeling [73], allowing robots to proactively adapt paths based on predicted static and dynamic obstacles. Most of these studies report promising accuracies, optimal path selection, and path or congestion reductions of over 25% [72,73].
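As an illustration of the global layer in such dual-layer frameworks, the sketch below implements plain A* on a small hypothetical occupancy grid; a deployed system would pair a global planner like this with a local strategy such as the dynamic window approach.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle),
    using Manhattan distance as an admissible heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, node, path)
    seen = set()
    while open_set:
        f, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no route exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # route around the wall in row 1
```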
Although promising results have been reported across task allocation and path planning studies, several limitations must be addressed through future research before full-scale deployment of such methods in indoor construction environments. A common gap across these systems is the limited physical validation in actual indoor environments; most of the algorithms have been tested in simplified or simulated domains with flat surfaces and ideal sensor conditions. To implement these methods on construction sites, future research should explore path planning that integrates localization, adaptation to complex layouts, energy-aware path optimization that accounts for robot power consumption, and multi-robot path planning that accounts for dynamic obstacles such as unpredictable worker movements. These enhancements will be crucial for enabling safe, efficient, and intelligent robot collaboration in indoor construction.
Table 4. Summary of multi-robot task allocation and path planning methods.

| Focus | Reference | Domain | Key Algorithm | Test Environment | Performance |
| --- | --- | --- | --- | --- | --- |
| Task allocation | Shida et al. [65] | Material transport | MADDPG | Simulated | 100% task allocation success |
| Task allocation | Dai et al. [14] | Industrial | DQN | Simulated | Up to 50% reduction in task allocation time |
| Task allocation | Chakraa et al. [67] | Inspection | HFBS, GA | Simulated | Allocation time of 0.527 s for 100 tasks |
| Task allocation | Miele et al. [68] | Agriculture | Auction-based | Simulated | High accuracy |
| Task allocation | Thangavelu & Napp [74] | Construction | MARS | Simulated | Reduced task allocation time and effective allocation for 9 robots |
| Task allocation | Zhang et al. [66] | - | DACL | Simulated | 40% increase in convergence |
| Task allocation | Aryan et al. [75] | - | CBS | Simulated | Effective task planning |
| Task allocation | Wang et al. [76] | General | K-means clustering, pairwise optimization | Simulated and indoor | 26.9% improvement in task allocation speed |
| Task allocation | Li et al. [77] | Indoor industrial | DQN, MPC, GCN | Simulated | Up to 96.82% task allocation accuracy |
| Path planning | Teng et al. [70] | General | GBNN, DWA | Simulated | Up to 19.74% path reduction |
| Path planning | Kuman & Sikander [69] | General | Artificial bee colony, PRM | Simulated | Higher path optimization, with 81% success rate |
| Path planning | Fareh et al. [71] | General | Neural field encoding, PSA | Simulated | Up to 34% reduction in path length, <5 s execution time |
| Path planning | De Castro et al. [78] | Construction | DQN, EKF | Simulated | 96–98% path planning accuracy |
| Path planning | Jathunga & Rajapaksha [79] | General | PRM, GA | Simulated | Reduced path length; planning time of 16–71 s for 2–8 robots |
| Path planning | Li et al. [73] | Logistics | A*, Monte Carlo | Simulated | 25% reduction in robot congestion, leading to optimized paths |
| Path planning | Luo et al. [80] | Smart factory | F-DQN | Simulated | 86% reduction in convergence time |
| Path planning | Matos et al. [81] | Logistics | Time-enhanced A* | Simulated | 100% path planning success rate, 48% reduction in computation |

Multi-Agent Deep Deterministic Policy Gradient (MADDPG), Deep Q-Network (DQN), Hybrid Filtered Beam Search (HFBS), Genetic Algorithm (GA), Minimal Additive Ramp Structure (MARS), Distributed Actor Critic Learning (DACL), Glasius Bio-Inspired Neural Network (GBNN), Dynamic Window Approach (DWA), Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO), Extended Kalman Filter (EKF), Probabilistic Roadmap (PRM).

4.2.2. Multi-Robot Navigation and Simultaneous Localization Methods for Indoor Environments

Simultaneous navigation and localization methods are important for the effective deployment of multi-robot teams in complex environments. However, indoor construction sites, which are often characterized by GPS-denied zones, dynamic layouts, and regular human interruption, require robust multi-robot collaboration strategies for navigation and localization. Although very few studies have presented such methods for indoor construction, several studies have presented methods for enhanced navigation and localization in general indoor environments and domains such as manufacturing. These methods possess strong potential for enhancement and implementation on indoor construction sites.
Table 5 presents a summary of studies and methods for multi-robot collaborative navigation, localization, and integrated SLAM systems. Current studies have explored a wide range of custom and traditional algorithms that combine path planning, behavior-based coordination, and learning-enabled safety frameworks. Current simultaneous navigation methods have been developed using algorithms such as A*, Dijkstra, Dynamic Window Approach (DWA), and Bug2, together with SLAM or real-time feedback control [82,83,84]. The methods are based on various navigation behaviors, including leader–follower formations where one robot guides the motion and path while other robots maintain specific formation or relative position [85], and a hitchhiking mechanism, where smaller or less powerful robots depend on larger robots for navigation [84]. These frameworks have demonstrated strong performance in simulated and physical lab environments, particularly in achieving smooth trajectory planning, efficient obstacle avoidance, and robust multi-robot coordination [83,85,86]. Current methods have achieved good performance and promising results, with studies achieving significant reductions in multi-robot navigation errors [83], fast navigation planning [82], as well as extended navigation ranges [85]. However, many of these approaches assume structured environmental maps, static environmental conditions, predefined paths, and reliable communication, which deviate from the complex and dynamic conditions of indoor construction sites. Current methods, therefore, require substantial enhancements to make them more suitable for practical deployment in indoor construction environments.
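The leader–follower behavior described above can be illustrated with a deliberately simplified proportional controller, in which a follower holds a fixed formation offset behind a moving leader; the gain, offset, and trajectory below are arbitrary illustrative values, not parameters from any cited study.

```python
def follower_step(leader_pos, follower_pos, offset, gain=0.5):
    """One control step of a leader-follower formation: the follower
    moves a fraction (gain) of the error toward its desired slot,
    which is the leader's position plus a fixed formation offset."""
    target = (leader_pos[0] + offset[0], leader_pos[1] + offset[1])
    return (follower_pos[0] + gain * (target[0] - follower_pos[0]),
            follower_pos[1] + gain * (target[1] - follower_pos[1]))

leader = (0.0, 0.0)
follower = (5.0, 5.0)          # starts far from its formation slot
for _ in range(20):
    leader = (leader[0] + 0.1, leader[1])   # leader advances along x
    follower = follower_step(leader, follower, offset=(-1.0, 0.0))
print(follower)  # converges to the slot one unit behind the leader
```

A proportional follower like this always trails a constantly moving leader by a small steady-state lag (here about 0.1 units); real formation controllers add feed-forward or velocity terms to remove it.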
Similar to multi-robot collaborative navigation methods, several studies have also proposed methods for the efficient and simultaneous localization of multiple robot agents. Current localization studies have mainly implemented techniques based on various traditional algorithms, such as the extended Kalman filter (EKF) and particle filter (PF) [87,88,89], as well as more advanced frameworks such as federated filtering, Monte Carlo localization (MCL), and observation-weighted Bayesian methods [88,90,91]. These algorithms have mainly been implemented based on data from various sensors, including odometry, IMU, LiDAR, RGB-D cameras, and UWB. Some studies also leveraged vision-based features, such as ArUco markers or YOLO-detected landmarks [89], while others integrated high-speed data association or two-stage filtering to improve reliability and reduce robot localization drift. Despite the promising performance of current multi-robot navigation and localization methods, several limitations have to be addressed, including the lack of real-world validation and inadequate obstacle avoidance techniques.
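To illustrate the particle filter family used in these localization studies, the following is a minimal 1D Monte Carlo localization sketch (predict with motion noise, weight by a range measurement to a known landmark, resample); the corridor, landmark position, and noise levels are hypothetical.

```python
import math
import random

def mcl_step(particles, control, measurement, landmark, noise=0.2):
    """One predict-update-resample cycle of Monte Carlo localization
    on a 1D corridor with a single known landmark."""
    # Predict: apply the motion command with additive Gaussian noise.
    moved = [p + control + random.gauss(0, noise) for p in particles]
    # Update: weight each particle by how well it explains the measured
    # distance to the landmark (Gaussian likelihood).
    weights = [math.exp(-((abs(landmark - p) - measurement) ** 2)
                        / (2 * noise ** 2)) for p in moved]
    total = sum(weights)
    if total == 0:                 # degenerate case: keep predicted set
        return moved
    # Resample particles in proportion to their normalized weights.
    return random.choices(moved, [w / total for w in weights],
                          k=len(particles))

random.seed(0)
particles = [random.uniform(0, 10) for _ in range(500)]  # unknown start
landmark, true_pos = 8.0, 2.0
for _ in range(5):                 # the robot moves +0.5 per step
    true_pos += 0.5
    particles = mcl_step(particles, 0.5, landmark - true_pos, landmark)
estimate = sum(particles) / len(particles)
print(round(estimate, 2))  # posterior mean, near the true position 4.5
```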
In addition to individual collaborative navigation and localization, some studies have presented methods for the simultaneous localization and mapping (SLAM) of multiple robots. Table 6 shows the related SLAM studies for multi-robot collaboration in indoor domains. SLAM is a critical capability that enables robotic agents to create maps of the environment while localizing themselves within the developed map [92]. In indoor construction environments, where layouts are often complex, GPS is unavailable, and conditions are constantly changing, SLAM is essential for enhancing the effectiveness of multi-robot collaborative frameworks [93]. Studies have presented SLAM systems designed to support collaborative robotics in unstructured indoor environments. Centralized techniques, such as those presented by Liu et al. [94], combine 2D LiDAR and loop closure filtering to improve map accuracy in multi-robot settings; this system achieved real-time front-end odometry and near-real-time map fusion with high visual consistency and sharing across several robots. Similarly, Jalil et al. [95] used an optimized LiDAR-based SLAM, which achieved accurate 3D mapping with an RMSE below 1 m in an indoor test, highlighting the suitability of the method for large indoor environments. Additionally, some studies have presented decentralized SLAM frameworks to address the scalability and communication limitations of centralized SLAM methods. The study in [96] presented decentralized loop closure and distributed computing to allow robot teams to map environments with minimal bandwidth usage; the system achieved high accuracy while minimizing computational cost and requirements. Other studies by Xia et al. [97] and Choi et al. [98] have explored visual SLAM systems. The method proposed by Xia et al. [97] combined point and line features for better performance in environments characterized by weak textures, such as drywall interiors and corridors.
Also, the study by Choi et al. [98] presented a vision-based method for real-time robot formation and obstacle avoidance using fisheye cameras, which could assist with autonomous navigation during material transport in indoor spaces. The SLAM methods presented in the studies can be adopted in indoor construction sites to enable teams of robots to build and share accurate maps in real time, supporting coordinated navigation, inspection, and task execution without relying on GPS systems. By utilizing decentralized communication, sensor fusion, and loop closure techniques, they can enhance multi-robot collaboration (MRC) in complex and dynamic environments.
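As a toy sketch of the map fusion step in centralized multi-robot SLAM, the code below merges two already co-registered occupancy grids cell by cell; real systems first align the maps via loop closures and pose-graph optimization, which is omitted here, and the grids themselves are hypothetical.

```python
def merge_maps(map_a, map_b):
    """Merge two co-registered occupancy grids cell by cell
    (-1 = unknown, 0 = free, 1 = occupied): a cell observed by either
    robot overrides 'unknown', and occupied wins over free, which is
    the conservative choice for collision avoidance."""
    merged = []
    for row_a, row_b in zip(map_a, map_b):
        row = []
        for a, b in zip(row_a, row_b):
            if a == -1:
                row.append(b)
            elif b == -1:
                row.append(a)
            else:
                row.append(max(a, b))   # occupied beats free
        merged.append(row)
    return merged

robot1 = [[0, 0, -1],          # each robot has only explored part
          [0, 1, -1]]          # of the shared floor plan
robot2 = [[-1, 0, 0],
          [-1, 1, 1]]
print(merge_maps(robot1, robot2))
```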
Table 5. Summary of multi-robot navigation and localization methods.

| Task | Reference | Method | Algorithm | Sensors | Test Environment | Performance |
| --- | --- | --- | --- | --- | --- | --- |
| Collaborative navigation | Ravankar et al. [86] | Leader–follower navigation | A*, EKF | Camera, depth sensor, IMU | Indoor | Effective navigation with high-accuracy map sharing |
| Collaborative navigation | Divya Vani et al. [83] | Leader–follower navigation | Dijkstra algorithm | Odometer, compass, infrared | Simulated | 28% overall device utilization for navigation |
| Collaborative navigation | Shankar & Shivakumar [82] | Leader–follower navigation | EKF, A* | LiDAR, IMU | Indoor | Robust navigation with 2.03 s navigation planning time |
| Collaborative navigation | Chen et al. [99] | Decentralized simultaneous navigation | Variational Bayesian inference | - | Simulated | 93–100% navigation success |
| Collaborative navigation | Cid et al. [85] | Leader–follower navigation | Dijkstra algorithm | LiDAR, IMU, depth camera | Simulated and indoor | Extended navigation range (135–270 m) |
| Collaborative navigation | M et al. [100] | Decentralized simultaneous navigation | A*, D*, DWA | LiDAR, IMU | Simulated and indoor | High navigation and planning performance |
| Collaborative navigation | Basha et al. [101] | Decentralized simultaneous navigation | Bug2, finite state machine | - | Indoor | Up to 98.2% improved navigation performance |
| Collaborative localization | Cai et al. [87] | Master–slave localization | DKF, EKF, YOLO | IMU, laser finder | Indoor | 11.3 mm 3D localization RMSE |
| Collaborative localization | Zhou et al. [89] | Master–slave localization | EKF | IMU, camera, odometer, ArUco vision system | Indoor | Mean localization error of 0.0028 and 0.0066 m for x and y |
| Collaborative localization | Tian et al. [90] | - | EKF | Odometry, camera | Simulated | High F1-score |
| Collaborative localization | Zhu et al. [91] | - | MCL, MLE | LiDAR, infrared | Simulated and indoor | Over 80% improvement in localization recovery |
| Collaborative localization | Luo et al. [88] | - | Particle filter | UWB | Simulated | 28.7% reduced localization error |
| Collaborative localization | Zahroof et al. [102] | Decentralized multi-robot localization | Greedy algorithm, EKF | GPS | Simulated | 1–1.5 m localization and tracking error per robot |
| Collaborative localization | Lajoie & Beltrame [96] | - | Graduated non-convexity (GNC) | LiDAR, RGB-D cameras, IMU | Indoor | High localization accuracy |
| Collaborative localization | Matsuda et al. [103] | Leader–follower localization | Particle filter, relative depth pose estimation | 2D LiDAR, depth camera | Indoor | 4–5× improvement compared to other algorithms |
| Collaborative localization | Chen et al. [104] | Decentralized multi-robot localization | SWOA | - | Indoor | 80% localization accuracy |

Probabilistic Roadmap Method (PRM), Dynamic Window Approach (DWA), Discrete Kalman Filter (DKF), Extended Kalman Filter (EKF), Monte Carlo Localization (MCL), Maximum Likelihood Estimation (MLE), Standard Whale Optimization Algorithm (SWOA).
Table 6. Multi-robot SLAM methods for indoor and complex environments.

| Reference | Method | Algorithm/Framework | Robot | Sensors | Test Environment | Performance |
| --- | --- | --- | --- | --- | --- | --- |
| Liu et al. [94] | Centralized LiDAR SLAM with loop closure | Hector SLAM, GTSAM | Custom mobile robots | 2D LiDAR, IMU | Indoor | Real-time odometry, accurate global map |
| Lajoie & Beltrame [96] | Decentralized spectral loop closure (Swarm-SLAM) | GNC-based pose graph SLAM | Boston Spot, Agilex mobile robot | LiDAR, IMU, RGB-D, stereo | Indoor | High accuracy with low bandwidth (95 MB) |
| Choi et al. [98] | Omnidirectional vision-based SLAM | OVSLAM + optical flow | Custom robot | Camera (fisheye) | Indoor | 3–6 cm error, real-time obstacle avoidance |
| Jalil et al. [95] | Centralized LiDAR SLAM with map fusion | F-LOAM | Jackal UGV, Sparkal | 2D LiDAR | Indoor | RMSE <1 m, fast map merging |
| Shi et al. [105] | Collaborative SLAM via 5G edge (MEC) | Gmapping + AKAZE merge | Festo Robotino | LiDAR, IMU | Simulated and indoor | 10 ms latency, improved map accuracy |
| Chang et al. [106] | Robust centralized SLAM for underground | Pose graph SLAM + GNC | Boston Spot, Husky UGV | LiDAR, IMU, beacons | Tunnels | Less than 2 m error |
| Liu et al. [107] | PF-SLAM with ORB-based fusion | Particle filter SLAM | Quanser QBot2 | RGB-D, gyroscope | Indoor | 0.002% map fusion error, fast pose estimation |
| Xia et al. [97] | Visual SLAM with point-line fusion | ORB + LSD + BA | Autolabor Pro1, QCar | RGB-D, IMU | Indoor | 50% faster map generation, RMSE of 0.035 m |

Graduated Non-Convexity (GNC), Fast LiDAR Odometry and Mapping (F-LOAM), Particle Filter-based SLAM (PF-SLAM), Georgia Tech Smoothing and Mapping Library (GTSAM), Bundle Adjustment (BA), Line Segment Detector (LSD), Omnidirectional Visual SLAM (OVSLAM), Accelerated-KASE Feature Descriptor (AKAZE).

4.3. Reinforcement Learning-Based Methods for HRC and MRC

Based on the review of various HRC and MRC papers, it is noted that a significant number of HRC and MRC papers relied heavily on reinforcement learning (RL). RL, which is a subset of machine learning, is an emerging domain in AI, where agents such as robots learn optimal behaviors through trial-and-error interactions with the environment [78]. Unlike supervised learning, where labeled data guide the learning process, RL algorithms learn by receiving rewards or penalties based on the outcomes of their actions, thereby enabling the agent to discover effective policies that maximize cumulative rewards over time [108].
The integration of reinforcement learning (RL) into mobile robot operation and control has emerged as a promising dimension for achieving autonomous and adaptive robot behaviors, especially in unstructured and dynamic environments [109,110]. Indoor construction sites are a primary application area for RL-based robot control and management systems [111]. This is because these environments are characterized by constantly changing layouts, narrow passageways, and the presence of static and dynamic obstacles, as well as human activities. Unlike traditional rule-based navigation or static path planning methods, the use of RL enables robots to learn optimal policies through continuous and iterative interaction with the environment [112]. This allows robotic agents to make decisions and adapt to new scenarios that may arise. This ability is particularly beneficial in construction settings where real-time adjustments, safety, and task efficiency are critical. Additionally, RL-based methods also support multimodal sensor integration, such as LiDAR, RGB-D, IMU, and thermal cameras, allowing robots to respond intelligently to a range of sensory inputs [113].
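The trial-and-error learning loop described above can be illustrated with minimal tabular Q-learning on a toy corridor; the environment, reward, and hyperparameters are hypothetical and far simpler than the deep RL methods reviewed here, but the update rule is the same one DQN approximates with a neural network.

```python
import random

def train_corridor(n=5, episodes=200, alpha=0.5, gamma=0.9, eps=0.3):
    """Tabular Q-learning on a 1D corridor: states 0..n-1, actions
    0 = left, 1 = right; reaching state n-1 ends the episode with
    reward +1. The agent learns by trial and error to move right."""
    q = [[0.0, 0.0] for _ in range(n)]
    rng = random.Random(42)
    for _ in range(episodes):
        s = 0
        while s != n - 1:
            # Epsilon-greedy selection balances exploration/exploitation.
            a = rng.randrange(2) if rng.random() < eps else int(q[s][1] > q[s][0])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n - 1 else 0.0
            # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a').
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = train_corridor()
policy = ["right" if q[s][1] > q[s][0] else "left" for s in range(4)]
print(policy)  # the learned greedy action in every non-terminal state
```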
Table 7 summarizes the studies that have implemented various reinforcement learning-based algorithms for robot path planning and accident avoidance. A growing body of research has explored RL-based navigation, path planning, and obstacle detection for mobile robots in indoor environments, and many of these studies demonstrate strong potential for deploying RL agents in complex environments such as indoor construction. In the path planning domain, studies have applied value-based algorithms, such as DQN, double DQN, and other custom improved models, to enable robots to compute optimal paths without prior site maps [114,115,116]. These studies have adopted frameworks that incorporate adaptive exploration, reward shaping, or hybrid optimization techniques to generate smooth, collision-free, and optimal paths in cluttered and complex environments while minimizing travel time and computational load [114,117]. Such methods are particularly suited to construction environments with narrow pathways, variable object placements, and limited prior mapping. By contrast, obstacle avoidance studies have focused on providing robots with real-time perception of their environments when maps are unavailable. Studies in this area typically utilized policy-based algorithms, such as proximal policy optimization, soft actor-critic, and DDPG, and incorporated sensory inputs from LiDAR, RGB-D, or monocular cameras to interpret the surroundings. Studies [118,119,120] highlighted how multimodal fusion and auxiliary learning tasks, such as velocity estimation, can be combined with RL systems to enhance the ability of robots to navigate around obstacles in complex environments.
These RL systems were further combined with deep neural networks, such as ConvLSTM, to model complex relationships and were capable of generalizing to diverse environments due to the integration of real-time online learning based on RL agents, making them promising for environments such as indoor construction, where obstacle configurations change rapidly.
Table 7. Summary of reinforcement learning methods for HRC and MRC.

| Application | Reference | Method/Framework | RL Algorithm | Validation Environment | Key Required Sensors | Results |
| --- | --- | --- | --- | --- | --- | --- |
| Path planning | Bai et al. [114] | MDP + improved DQN | DQN | Simulated | - | 12% path reduction |
| Path planning | Bingol et al. [117] | DRL + fuzzy logic | PPO | Simulated | LiDAR | 91% in complex layouts |
| Path planning | Tang et al. [121] | Causal deconfounding (CD-DRL-MP) + causal modeling | DQN | Simulated | - | 89.2–95.3% planning rate |
| Path planning | Takzare et al. [115] | DQN + 4-part reward function | DQN | Simulated | LiDAR | 50% improvement |
| Path planning | Jing & Weiya [116] | DRL + QPSO | Custom DRL with QPSO | Simulated | - | 98.2% accuracy |
| Obstacle avoidance | Zhao et al. [122] | Hierarchical planning + adaptive control | DDPG | Simulated | - | 4.01% deviation |
| Obstacle avoidance | Lu et al. [118] | D3QN + ConvLSTM | D3QN | Simulated and indoor | RGB-D, LiDAR | High obstacle avoidance |
| Obstacle avoidance | Yu et al. [119] | DDPG with zone-aware safety strategy | DDPG | Simulated | - | High reward scores |
| Obstacle avoidance | Song et al. [120] | Multimodal DRL + bilinear fusion | DQN | Simulated and indoor | Kinect, LiDAR | 94.4% simulation success |
| Obstacle avoidance | Chen et al. [123] | Egocentric local grid maps | Dueling DQN | Simulated | 2D laser scanner | Outperforms standard DQN |

Markov Decision Process (MDP), Deep Q-Network (DQN), Proximal Policy Optimization (PPO), Quantum-Behaved Particle Swarm Optimization (QPSO), Deep Deterministic Policy Gradient (DDPG), Dual Double Deep Q-Network (D3QN).
Despite promising results, RL-based path planning and obstacle avoidance systems may face several practical challenges when transitioning from research to real-world indoor construction applications. Most of the methods have been tested only in simulated environments, which differ substantially from real sites. Some models also have high computational demands, which may limit real-time inference on construction sites. Additionally, many current systems are optimized for single-robot operation and have not been scaled to support the collaborative behaviors needed for multi-robot construction tasks. More research is needed to mature these methods for full-scale deployment in indoor construction.

5. Challenges of Current HRC and MRC Methods and Direction for Future Research

Current studies have provided several promising techniques and methods for improved deployment of HRC and MRC teams in various industrial domains. Despite promising results, high accuracies, and significant enhancement in productivity, there are several challenges that need to be addressed to make these methods fully and practically applicable in complex indoor construction sites. Figure 5 shows directions for future research to enhance HRC and MRC deployment in indoor construction environments. The figure highlights six key categories.
First, current HRC interaction methods are not optimized for dynamic and complex environments. Most methods for human–robot interaction rely on vision-based systems, which require a clear line of sight between robotic agents and human workers. This is a significant limitation in indoor construction environments, which are characterized by narrow spaces and poor lighting. Although current methods have proven reliable and accurate under controlled settings, they may become ineffective in occluded or low-light conditions [33,54]. Additionally, several MRC studies have only been tested in simulated environments, with a few studies evaluating systems in structured indoor environments [89,99,100]. Validating current techniques in real environments is essential as it will present opportunities to identify practical challenges, ensure more adaptive optimization, and provide opportunities to refine proposed methods [71,112,124]. Future research should therefore prioritize on-site validation through field experiments, pilot projects, or test environments that represent actual indoor construction scenarios. Experiments should also be conducted over extended periods to evaluate the adaptability of current HRC and MRC methods over time, robustness under varying operational conditions, and long-term implementation.
Second, current human–robot safety and obstacle avoidance methods are optimized mainly for ideal environmental conditions and depend on accurate data from specific sensors. Methods for human–robot safety through trajectory planning, intention estimation, and collision prevention rely heavily on sensor data such as LiDAR, RGB-D, and depth cameras [50,53]. This presents several challenges in fast-paced indoor construction environments, which are characterized by occluded views, confined spaces, variable lighting, and constant change. Some studies have shown that multimodal sensor fusion enhances robot performance even in unfavorable environments [125,126]. Future research should therefore prioritize the development of robust multimodal sensor frameworks, especially for human–robot and multi-robot interaction and collision prevention. These systems should combine complementary sensing modalities, such as radio frequency (RF)-based localization, depth sensors, and inertial measurement units (IMUs), with traditional vision-based methods. By fusing inputs from multiple sensors, robots can more robustly interpret human intentions and identify both dynamic and static obstacles, even in unfavorable environments [127]. Additionally, adaptive sensor fusion algorithms capable of adjusting to environmental conditions such as dynamic lighting and obstructions should be explored, as studies in other domains have presented similar methods [128,129]. This will ensure real-time adaptability and responsiveness of robotic systems under varying indoor conditions.
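As a minimal illustration of why fusing redundant sensors helps, the sketch below performs inverse-variance weighted fusion (the static, scalar case of a Kalman update) of three hypothetical distance-to-worker readings; the sensor names and noise variances are illustrative assumptions, not values from any cited system.

```python
def fuse(measurements):
    """Inverse-variance weighted fusion of redundant scalar readings,
    each given as (value, variance): less noisy sensors contribute
    more, and the fused variance is below that of any single sensor."""
    inv = [1.0 / var for _, var in measurements]
    fused_var = 1.0 / sum(inv)
    fused = fused_var * sum(val / var for val, var in measurements)
    return fused, fused_var

# Hypothetical distance-to-worker estimates from three sensors.
readings = [(2.1, 0.04),   # depth camera: (value in m, variance)
            (2.4, 0.25),   # RF/UWB tag
            (2.0, 0.09)]   # LiDAR cluster
value, var = fuse(readings)
print(round(value, 3), round(var, 4))  # fused estimate is more certain
```

The fused variance here (~0.025) is lower than the best single sensor's (0.04), which is the basic argument for multimodal redundancy when one modality degrades.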
Third, most human–robot interaction and accident prevention methods have been developed for single-worker, single-robot conditions. The vision- and gesture-based robot control methods presented have been validated only in the context of single-worker, single-robot systems [11,13,43], and the collision and accident prevention methods have likewise been validated for single-robot, single-worker interactions [51,130]. In fast-paced indoor construction environments where multiple workers and multiple robots interact, current methods may face limitations. To ensure productive, safe, and effective collaboration on indoor construction sites, HRC systems should be developed with multi-agent and multi-worker coordination techniques. Advanced techniques, such as multi-worker pose estimation, multi-worker identification, and decentralized multi-robot control algorithms, can be integrated into HRC systems to ensure context-aware coordination among multiple agents and workers [83,96]. Furthermore, collision prevention methods that consider multiple dynamic obstacles, specifically workers, should be developed to enhance safety in complex and high-density workspaces.
Fourth, there is insufficient integration between task allocation, path planning, and localization for multi-robot teams. Many of the MRC frameworks focused on optimizing individual dimensions, such as task allocation or path planning. Studies have not presented multi-robot frameworks that incorporate task allocation, path planning, and navigation simultaneously. Real-world multi-robot systems require comprehensive integration of task planning, navigation, and localization for seamless collaboration [131,132]. In practice, addressing these elements individually may lead to inefficiencies, such as task delays, route congestion, and inadequate resource utilization. For instance, a robot may be assigned an optimal task but lack an efficient path due to poor coordination with localization data or collision avoidance modules. Future research should aim to develop more comprehensive multi-robot collaborative frameworks that combine simultaneous task allocation, path planning, navigation, and localization. An integrated framework that combines all of these dimensions will enable multi-robot systems to be more effective and autonomous. Recent frameworks using joint optimization for task planning and federated SLAM-based localization show promising directions for integration, and these frameworks can be combined into single comprehensive systems [61,75]. This will reduce latency, improve adaptability, and enhance coordination among robots while reducing the limitations caused by utilizing individual independent systems.
Fifth, while several methods have been presented, limited attention has been given to the optimization and reduction of computational requirements for collaborative robot systems. A critical challenge in deploying robotic systems in indoor environments is the need for lightweight and real-time onboard inference [133]. Many of the existing perception and control algorithms demand substantial computational resources, which can hinder their implementation into robot platforms with limited processing power. This limitation is even more significant in indoor construction sites where there may be restricted access to external computational infrastructure and network connectivity [134]. As a result, there is a need for algorithms that are both computationally efficient and capable of delivering real-time performance. Studies have presented approaches such as model pruning, quantization, knowledge distillation, and neuromorphic computing, which can be implemented to reduce the computational requirements of various techniques without compromising accuracy [135,136,137]. Additionally, the development of intelligent task allocation and task-aware resource allocation systems that dynamically allocate tasks based on robot capabilities can help to enhance the task performance and effectiveness of multi-robot teams.
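To illustrate one of the model compression techniques mentioned, the following is a minimal sketch of global magnitude pruning, which zeroes the smallest-magnitude fraction of a weight list; real pipelines operate on tensors, prune per layer or globally, and fine-tune the model afterward to recover accuracy.

```python
def prune_weights(weights, sparsity=0.5):
    """Global magnitude pruning: zero out the smallest-magnitude
    fraction (sparsity) of weights, a simple way to shrink models
    for on-board inference; ties at the threshold are also pruned."""
    flat = sorted(abs(w) for w in weights)
    k = int(len(flat) * sparsity)
    threshold = flat[k - 1] if k > 0 else float("-inf")
    return [0.0 if abs(w) <= threshold else w for w in weights]

w = [0.05, -0.8, 0.01, 0.6, -0.02, 0.3]   # toy weight vector
print(prune_weights(w, 0.5))               # half the weights zeroed
```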
Finally, while significant progress has been made toward enhancing the technical capabilities of HRC and MRC systems, such as perception, navigation, and task coordination, limited attention has been given to user-centered factors and the safety implications of robotic malfunctions. Many collaborative systems require workers to interact with robots through gesture-based wearables, visual interfaces, or brain–computer devices [11,13]. Without sufficient training and trust in system usability, these interfaces may impose substantial cognitive and ergonomic burdens, particularly in high-pressure and dynamic environments such as indoor construction sites [138]. Inadequate consideration of these human factors can affect both the effective deployment of these systems and their long-term adoption [139]. At the same time, robot malfunctions, such as sensor failures, hardware faults, software errors, or communication breakdowns, introduce additional safety hazards [140]. A malfunctioning robot may misinterpret commands, behave unpredictably, or fail to detect human presence, increasing the risk of collision or injury. This risk is even greater in multi-robot systems, where a failure in one robot can cascade across the team, especially in shared workspaces or coordinated tasks [141]. Despite the severity of these risks, current studies rarely address fault-tolerant control or failure recovery mechanisms adapted to indoor construction. To address these concerns, future research should explore realistic and personalized worker training programs using virtual or hybrid setups and focus on developing resilient robot safety architectures, including real-time fault detection, multimodal sensing redundancy, task-aware risk modeling, and emergency overrides. Such efforts are essential to ensure that HRC and MRC systems remain both usable and safe under dynamic and unpredictable indoor site conditions.

6. Conclusions

This review presents a comprehensive analysis of existing HRC and MRC methods developed for indoor environments and critically evaluates the cross-domain applicability of current methods in indoor construction environments. To achieve this, a systematic literature review was conducted in which 76 articles meeting predefined inclusion criteria were critically evaluated and analyzed to identify current HRC and MRC methods.
This review identified two main dimensions of ongoing work to enhance the deployment of HRC teams in complex and cluttered indoor environments: (1) enhancing human–robot interaction and (2) improving human–robot safety. The review revealed three predominant methods for human–robot communication (gesture-based, vision-based, and BCI), as well as two key accident-prevention approaches: human–robot collision prevention and intention and trajectory prediction. These methods highlight a strong potential for deploying human–robot teams in industrial environments. Similar to HRC, the review revealed significant research efforts to enhance MRC in various complex domains. Current studies have focused on improving simultaneous localization, navigation, path planning, and task allocation of multiple robotic agents to ensure efficiency and enhanced task performance, drawing on a wide range of traditional, custom, and reinforcement learning methods to coordinate multi-robot activities.
Although promising results have been achieved in various HRC and MRC studies, current methods may face significant challenges in complex environments such as indoor construction, and further research is needed to improve them. Recommended dimensions for future studies include: (1) real-world testing and optimization of current HRC and MRC systems; (2) incorporating sensor fusion to make robotic agents more intelligent in complex environments and developing environmentally adaptive techniques; (3) developing integrated multi-worker and multi-robot monitoring frameworks; (4) developing comprehensive frameworks that simultaneously integrate dimensions such as task planning, path planning, and navigation; (5) optimizing systems to reduce computational demands; and (6) incorporating user-centered design, worker training systems, and resilient fault-tolerant safety mechanisms to build trust. Addressing these dimensions in future research will strongly enhance the potential of deploying current methods in actual indoor construction environments.
While this review provides several insights regarding the implementation of HRC and MRC methods in indoor environments, some limitations should be acknowledged to guide future studies. First, this review focused exclusively on mobile robot platforms, thereby excluding other systems such as static manipulators, UAVs, exoskeletons, and hybrid systems. These robot types, despite their own challenges, may also offer valuable insights and additional deployment potential for indoor construction environments. Future studies should evaluate these platforms and discuss methods to enhance their effective integration into indoor HRC and MRC teams. Second, this review primarily focused on technical aspects of HRC, such as collision prevention and human–robot communication protocols, with limited discussion of human-centered dimensions such as usability, cognitive load, and user trust. These factors are also critical for the practical adoption of HRC and MRC teams and should therefore be systematically examined in future work. Third, this review focused on the deployment potential of technical HRC and MRC methods and did not directly evaluate the integration of specific construction tasks into MRC and HRC frameworks, owing to the limited availability of task-focused HRC and MRC studies conducted in indoor construction environments. Future reviews should identify and propose methods for integrating construction tasks into HRC and MRC frameworks, preferably by evaluating current methods in other related domains and proposing cross-application methods. Lastly, this review did not comprehensively cover other dimensions, such as simultaneous localization and mapping (SLAM) techniques for multi-robot and human–robot collaboration, as that topic constitutes a substantial research domain in its own right.
Future review studies should be conducted to comprehensively discuss SLAM methods for multi-robot and human–robot collaboration, as well as implementation methods for complex indoor construction environments.

Author Contributions

Conceptualization, F.X.D., M.R. and M.-K.K.; methodology, F.X.D. and M.R.; software, F.X.D.; validation, F.X.D., M.-K.K., T.W.K., J.I.K., S.L. (Seulbi Lee) and S.L. (Seulki Lee); formal analysis, F.X.D. and M.R.; investigation, F.X.D., M.R. and M.-K.K.; resources, M.-K.K., T.W.K., J.I.K., S.L. (Seulbi Lee) and S.L. (Seulki Lee); data curation, F.X.D. and M.R.; writing—original draft, F.X.D.; writing—review and editing, F.X.D., M.R. and M.-K.K.; visualization, F.X.D.; supervision, M.-K.K., T.W.K., S.L. (Seulbi Lee) and S.L. (Seulki Lee); funding acquisition, M.-K.K., T.W.K., J.I.K., S.L. (Seulbi Lee) and S.L. (Seulki Lee). All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure, and Transport (Grant No. RS-2024-00512799).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HRC: Human–Robot Collaboration
MRC: Multi-Robot Collaboration
URGR: Ultra-Range Gesture Recognition
HQ-Net: High Quality Network
ViT: Vision Transformer
CNN: Convolutional Neural Network
ASRGCN: Attention-Based Spatial-Temporal Relational Graph Convolutional Network
EEG: Electroencephalograph
MLP: Multi-Layer Perceptron
SVM: Support Vector Machines
LSTM: Long Short-Term Memory
DQN: Deep Q-Networks
CCAI: Cooperative Collision Avoidance-Based Interaction
MLPRA: Multiscale Local Perception Region Approach
MADDPG: Multi-Agent Deep Deterministic Policy Gradient
HFBS: Hybrid Filtered Beam Search
GA: Genetic Algorithm
MARS: Minimal Additive Ramp Structure
DACL: Distributed Actor Critic Learning
GBNN: Glasius Bio-Inspired Neural Network
DWA: Dynamic Window Approach
ACO: Ant Colony Optimization
PSO: Particle Swarm Optimization
EKF: Extended Kalman Filter
PRM: Probabilistic Roadmap
DKF: Discrete Kalman Filter
MCL: Monte Carlo Localization
MLE: Maximum Likelihood Estimation
SWOA: Standard Whale Optimization Algorithm
MDP: Markov Decision Process
PPO: Proximal Policy Optimization
QPSO: Quantum-Behaved Particle Swarm Optimization
DDPG: Deep Deterministic Policy Gradient
D3QN: Dual Double Deep Q-Network
GNC: Graduated Non-Convexity
F-LOAM: Fast LiDAR Odometry and Mapping
PF-SLAM: Particle Filter-Based SLAM
GTSAM: Georgia Tech Smoothing and Mapping Library
BA: Bundle Adjustment
LSD: Line Segment Detector
OVSLAM: Omnidirectional Visual SLAM
AKASE: Accelerated-KASE Feature Descriptor

References

  1. Aghimien, D.O.; Aigbavboa, C.O.; Oke, A.E.; Thwala, W.D. Mapping out Research Focus for Robotics and Automation Research in Construction-Related Studies. J. Eng. Des. Technol. 2019, 18, 1063–1079. [Google Scholar] [CrossRef]
  2. Wei, H.-H.; Zhang, Y.; Sun, X.; Chen, J.; Li, S. Intelligent Robots and Human-Robot Collaboration in the Construction Industry: A Review. J. Intell. Constr. 2023, 1, 1–12. [Google Scholar] [CrossRef]
  3. Chen, X.; Huang, H.; Liu, Y.; Li, J.; Liu, M. Robot for Automatic Waste Sorting on Construction Sites. Autom. Constr. 2022, 141, 104387. [Google Scholar] [CrossRef]
  4. Halder, S.; Afsari, K. Robots in Inspection and Monitoring of Buildings and Infrastructure: A Systematic Review. Appl. Sci. 2023, 13, 2304. [Google Scholar] [CrossRef]
  5. Parascho, S. Construction Robotics: From Automation to Collaboration. Annu. Rev. Control Robot. Auton. Syst. 2023, 6, 183–204. [Google Scholar] [CrossRef]
  6. Follini, C.; Magnago, V.; Freitag, K.; Terzer, M.; Marcher, C.; Riedl, M.; Giusti, A.; Matt, D.T. BIM-Integrated Collaborative Robotics for Application in Building Construction and Maintenance. Robotics 2020, 10, 2. [Google Scholar] [CrossRef]
  7. Braga, R.G.; Tahir, M.O.; Iordanova, I.; St-Onge, D. Robotic Deployment on Construction Sites: Considerations for Safety and Productivity Impact. arXiv 2024. [Google Scholar] [CrossRef]
  8. Sheng, W.; Thobbi, A.; Gu, Y. An Integrated Framework for Human–Robot Collaborative Manipulation. IEEE Trans. Cybern. 2015, 45, 2030–2041. [Google Scholar] [CrossRef]
  9. Bao, C.; Hu, Y.; Yu, Z. Current Study on Multi-Robot Collaborative Vision SLAM. Appl. Comput. Eng. 2024, 35, 80–88. [Google Scholar] [CrossRef]
  10. Kim, S.; Jang, H.; Ha, J.; Lee, D.; Ha, Y.; Song, Y. Time-Interval-Based Collision Detection for 4WIS Mobile Robots in Human-Shared Indoor Environments. Sensors 2025, 25, 890. [Google Scholar] [CrossRef]
  11. Liu, Y.; Habibnezhad, M.; Jebelli, H. Brain-Computer Interface for Hands-Free Teleoperation of Construction Robots. Autom. Constr. 2021, 123, 103523. [Google Scholar] [CrossRef]
  12. Liu, Y.; Habibnezhad, M.; Jebelli, H. Brainwave-Driven Human-Robot Collaboration in Construction. Autom. Constr. 2021, 124, 103556. [Google Scholar] [CrossRef]
  13. Wang, X.; Veeramani, D.; Zhu, Z. Wearable Sensors-Based Hand Gesture Recognition for Human–Robot Collaboration in Construction. IEEE Sens. J. 2023, 23, 495–505. [Google Scholar] [CrossRef]
  14. Dai, Y.; Kim, D.; Lee, K. Development of a Fleet Management System for Multiple Robots’ Task Allocation Using Deep Reinforcement Learning. Processes 2024, 12, 2921. [Google Scholar] [CrossRef]
  15. Sandanika, W.A.H.; Wishvajith, S.H.; Randika, S.; Thennakoon, D.A.; Rajapaksha, S.K.; Jayasinghearachchi, V. ROS-Based Multi-Robot System for Efficient Indoor Exploration Using a Combined Path Planning Technique. J. Robot. Control 2024, 5, 1241–1260. [Google Scholar]
  16. Zeng, L.; Guo, S.; Zhu, M.; Duan, H.; Bai, J. An Improved Trilateral Localization Technique Fusing Extended Kalman Filter for Mobile Construction Robot. Buildings 2024, 14, 1026. [Google Scholar] [CrossRef]
  17. Mo, C.; Cao, J.; Zhang, F.; Ji, X. An Autonomous Spraying Method for Indoor Spraying Robots Based on Visual Assistance. In Proceedings of the International Conference on Pattern Recognition and Image Analysis (PRIA 2024), Nanjing, China, 18–20 October 2024; Shan, M., Lei, T., Eds.; SPIE: Bellingham, WA, USA, 2025; p. 45. [Google Scholar] [CrossRef]
  18. Chen, J.; Kim, P.; Cho, Y.K.; Ueda, J. Object-Sensitive Potential Fields for Mobile Robot Navigation and Mapping in Indoor Environments. In Proceedings of the 2018 15th International Conference on Ubiquitous Robots (UR), Honolulu, HI, USA, 26–30 June 2018; pp. 328–333. [Google Scholar] [CrossRef]
  19. Chea, C.P.; Bai, Y.; Pan, X.; Arashpour, M.; Xie, Y. An Integrated Review of Automation and Robotic Technologies for Structural Prefabrication and Construction. Transp. Saf. Environ. 2020, 2, 81–96. [Google Scholar] [CrossRef]
  20. Samsami, R. A Systematic Review of Automated Construction Inspection and Progress Monitoring (ACIPM): Applications, Challenges, and Future Directions. CivilEng 2024, 5, 265–287. [Google Scholar] [CrossRef]
  21. Xu, Z.; Song, T.; Guo, S.; Peng, J.; Zeng, L.; Zhu, M. Robotics Technologies Aided for 3D Printing in Construction: A Review. Int. J. Adv. Manuf. Technol. 2022, 118, 3559–3574. [Google Scholar] [CrossRef]
  22. Zhang, M.; Xu, R.; Wu, H.; Pan, J.; Luo, X. Human–Robot Collaboration for on-Site Construction. Autom. Constr. 2023, 150, 104812. [Google Scholar] [CrossRef]
  23. Linnenluecke, M.K.; Marrone, M.; Singh, A.K. Conducting Systematic Literature Reviews and Bibliometric Analyses. Aust. J. Manag. 2020, 45, 175–194. [Google Scholar] [CrossRef]
  24. Hallinger, P.; Kovačević, J. A Bibliometric Review of Research on Educational Administration: Science Mapping the Literature, 1960 to 2018. Rev. Educ. Res. 2019, 89, 335–369. [Google Scholar] [CrossRef]
  25. Shaban, I.A.; Eltoukhy, A.E.E.; Zayed, T. Systematic and Scientometric Analyses of Predictors for Modelling Water Pipes Deterioration. Autom. Constr. 2023, 149, 104710. [Google Scholar] [CrossRef]
  26. Fu, Y.; Chen, J.; Lu, W. Human-Robot Collaboration for Modular Construction Manufacturing: Review of Academic Research. Autom. Constr. 2024, 158, 105196. [Google Scholar] [CrossRef]
  27. Liang, C.-J.; Wang, X.; Kamat, V.R.; Menassa, C.C. Human–Robot Collaboration in Construction: Classification and Research Trends. J. Constr. Eng. Manag. 2021, 147, 03121006. [Google Scholar] [CrossRef]
  28. Oyediran, H.; Shiraz, A.; Peavy, M.; Merino, L.; Kim, K. Human-Aware Safe Robot Control and Monitoring System for Operations in Congested Indoor Construction Environment. In Proceedings of the Construction Research Congress 2024, Des Moines, IA, USA, 20–23 March 2024; pp. 806–815. [Google Scholar] [CrossRef]
  29. Gautam, S.; Shah, S.; Kurumbanshi, S. Revolutionizing Robotics: A Scalable and Versatile Mobile Robotic Arm for Modern Applications. In Proceedings of the 2023 IEEE Pune Section International Conference (PuneCon), Pune, India, 14 December 2023; pp. 1–7. [Google Scholar] [CrossRef]
  30. Tang, Q.; Niu, Y. Research on Autonomous Obstacle Avoidance for Indoor UAVs Based on Vision and Laser. In Proceedings of the 2024 International Conference on Interactive Intelligent Systems and Techniques (IIST), Bhubaneswar, India, 4 March 2024; pp. 73–80. [Google Scholar] [CrossRef]
  31. Stedman, H.; Kocer, B.B.; van Zalk, N.; Kovac, M.; Pawar, V.M. Evaluating Immersive Teleoperation Interfaces: Coordinating Robot Radiation Monitoring Tasks in Nuclear Facilities. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May 2023; pp. 11972–11978. [Google Scholar] [CrossRef]
  32. Lee, D.; Han, K. Vision-Based Construction Robot for Real-Time Automated Welding with Human-Robot Interaction. Autom. Constr. 2024, 168, 105782. [Google Scholar] [CrossRef]
  33. Bamani, E.; Nissinman, E.; Meir, I.; Koenigsberg, L.; Sintov, A. Ultra-Range Gesture Recognition Using a Web-Camera in Human–Robot Interaction. Eng. Appl. Artif. Intell. 2024, 132, 108443. [Google Scholar] [CrossRef]
  34. Sahoo, J.P.; Sahoo, S.P.; Ari, S.; Patra, S.K. Hand Gesture Recognition Using Densely Connected Deep Residual Network and Channel Attention Module for Mobile Robot Control. IEEE Trans. Instrum. Meas. 2023, 72, 5008011. [Google Scholar] [CrossRef]
  35. Wang, X.; Zhu, Z. Vision–Based Framework for Automatic Interpretation of Construction Workers’ Hand Gestures. Autom. Constr. 2021, 130, 103872. [Google Scholar] [CrossRef]
  36. Aggravi, M.; Sirignano, G.; Giordano, P.R.; Pacchierotti, C. Decentralized Control of a Heterogeneous Human–Robot Team for Exploration and Patrolling. IEEE Trans. Autom. Sci. Eng. 2022, 19, 3109–3125. [Google Scholar] [CrossRef]
  37. Stancin, I.; Cifrek, M.; Jovic, A. A Review of EEG Signal Features and Their Application in Driver Drowsiness Detection Systems. Sensors 2021, 21, 3786. [Google Scholar] [CrossRef]
  38. Stančić, I.; Musić, J.; Grujić, T. Gesture Recognition System for Real-Time Mobile Robot Control Based on Inertial Sensors and Motion Strings. Eng. Appl. Artif. Intell. 2017, 66, 33–48. [Google Scholar] [CrossRef]
  39. Ghinoiu, B.; Vlădăreanu, V.; Travediu, A.-M.; Vlădăreanu, L.; Pop, A.; Feng, Y.; Zamfirescu, A. EEG-Based Mobile Robot Control Using Deep Learning and ROS Integration. Technologies 2024, 12, 261. [Google Scholar] [CrossRef]
  40. Liu, Y.; Habibnezhad, M.; Jebelli, H.; Monga, V. Worker-in-the-Loop Cyber-Physical System for Safe Human-Robot Collaboration in Construction. In Proceedings of the Computing in Civil Engineering 2021, Orlando, FL, USA, 12–14 September 2021; American Society of Civil Engineers: Reston, VA, USA, 2022; pp. 1075–1083. [Google Scholar] [CrossRef]
  41. Keller, M.; Taube, W.; Lauber, B. Task-Dependent Activation of Distinct Fast and Slow(Er) Motor Pathways during Motor Imagery. Brain Stimul. 2018, 11, 782–788. [Google Scholar] [CrossRef]
  42. Vanamala, H.R.; Akash, S.M.; Vinay, A.; Kumar, S.; Rathod, M. Gesture and Voice Controlled Robot for Industrial Applications. In Proceedings of the 2022 International Conference for Advancement in Technology (ICONAT), Goa, India, 21 January 2022; pp. 1–8. [Google Scholar] [CrossRef]
  43. Xie, J.; Xu, Z.; Zeng, J.; Gao, Y.; Hashimoto, K. Human–Robot Interaction Using Dynamic Hand Gesture for Teleoperation of Quadruped Robots with a Robotic Arm. Electronics 2025, 14, 860. [Google Scholar] [CrossRef]
  44. Budzan, S.; Wyżgolik, R.; Kciuk, M.; Kulik, K.; Masłowski, R.; Ptasiński, W.; Szkurłat, O.; Szwedka, M.; Woźniak, Ł. Using Gesture Recognition for AGV Control: Preliminary Research. Sensors 2023, 23, 3109. [Google Scholar] [CrossRef]
  45. Wang, Z.; Hai, M.; Liu, X.; Pei, Z.; Qian, S.; Wang, D. A Human–Robot Interaction Control Strategy for Teleoperation Robot System under Multi-Scenario Applications. Int. J. Intell. Robot. Appl. 2025, 9, 125–145. [Google Scholar] [CrossRef]
  46. Wang, X.; Veeramani, D.; Dai, F.; Zhu, Z. Context-aware Hand Gesture Interaction for Human–Robot Collaboration in Construction. Comput. Civ. Infrastruct. Eng. 2024, 39, 3489–3504. [Google Scholar] [CrossRef]
  47. Yang, Y.; Li, Z.; Shi, P.; Li, G. Fuzzy-Based Control for Multiple Tasks With Human–Robot Interaction. IEEE Trans. Fuzzy Syst. 2024, 32, 5802–5814. [Google Scholar] [CrossRef]
  48. Yuan, Y.; Li, Z.; Liu, Y. Brain Teleoperation of a Mobile Robot Using Deep Learning Technique. In Proceedings of the 2018 3rd International Conference on Advanced Robotics and Mechatronics (ICARM), Singapore, 18–20 July 2018; pp. 54–59. [Google Scholar] [CrossRef]
  49. Liu, Y.; Jebelli, H. Intention Estimation in Physical Human-Robot Interaction in Construction: Empowering Robots to Gauge Workers’ Posture. In Proceedings of the Construction Research Congress 2022, Arlington, VA, USA, 9–12 March 2022; American Society of Civil Engineers: Reston, VA, USA, 2022; pp. 621–630. [Google Scholar] [CrossRef]
  50. Liu, Y.; Jebelli, H. Intention-aware Robot Motion Planning for Safe Worker–Robot Collaboration. Comput. Civ. Infrastruct. Eng. 2024, 39, 2242–2269. [Google Scholar] [CrossRef]
  51. Yu, X.; Guo, X.; He, W.; Arif Mughal, M.; Zhang, D. Real-Time Trajectory Planning and Obstacle Avoidance for Human–Robot Co-Transporting. IEEE Trans. Autom. Sci. Eng. 2025, 22, 2969–2985. [Google Scholar] [CrossRef]
  52. Cai, J.; Du, A.; Li, S. Prediction-Enabled Collision Risk Estimation for Safe Human-Robot Collaboration on Unstructured and Dynamic Construction Sites. In Proceedings of the Computing in Civil Engineering 2021, Orlando, FL, USA, 12–14 September 2021; American Society of Civil Engineers: Reston, VA, USA, 2022; pp. 34–41. [Google Scholar]
  53. Van Dang, C.; Ahn, H.; Kim, J.-W.; Lee, S.C. Collision-Free Navigation in Human-Following Task Using a Cognitive Robotic System on Differential Drive Vehicles. IEEE Trans. Cogn. Dev. Syst. 2023, 15, 78–87. [Google Scholar] [CrossRef]
  54. Ghandour, M.; Liu, H.; Stoll, N.; Thurow, K. Human Robot Interaction for Hybrid Collision Avoidance System for Indoor Mobile Robots. Adv. Sci. Technol. Eng. Syst. J. 2017, 2, 650–657. [Google Scholar] [CrossRef]
  55. Pramanik, A.; Choi, S.W.; Li, Y.; Nguyen, L.V.; Kim, K.; Tran, H.-D. Perception-Based Runtime Monitoring and Verification for Human-Robot Construction Systems. In Proceedings of the 2024 22nd ACM-IEEE International Symposium on Formal Methods and Models for System Design (MEMOCODE), Raleigh, NC, USA, 3 October 2024; pp. 124–134. [Google Scholar] [CrossRef]
  56. Teodorescu, C.S.; West, A.; Lennox, B. Bayesian Optimization with Embedded Stochastic Functionality for Enhanced Robotic Obstacle Avoidance. Control Eng. Pract. 2025, 154, 106141. [Google Scholar] [CrossRef]
  57. Cai, J.; Du, A.; Liang, X.; Li, S. Prediction-Based Path Planning for Safe and Efficient Human–Robot Collaboration in Construction via Deep Reinforcement Learning. J. Comput. Civ. Eng. 2023, 37, 04022046. [Google Scholar] [CrossRef]
  58. Mulás-Tejeda, E.; Gómez-Espinosa, A.; Escobedo Cabello, J.A.; Cantoral-Ceballos, J.A.; Molina-Leal, A. Implementation of a Long Short-Term Memory Neural Network-Based Algorithm for Dynamic Obstacle Avoidance. Sensors 2024, 24, 3004. [Google Scholar] [CrossRef]
  59. Li, Z.; Li, B.; Liang, Q.; Liu, W.; Hou, L.; Rong, X. A Quadruped Robot Obstacle Avoidance and Personnel Following Strategy Based on Ultra-Wideband and Three-Dimensional Laser Radar. Int. J. Adv. Robot. Syst. 2022, 19, 17298806221114705. [Google Scholar] [CrossRef]
  60. Che, Y.; Sun, C.T.; Okamura, A.M. Avoiding Human-Robot Collisions Using Haptic Communication. In Proceedings of the 2018 IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018; pp. 5828–5834. [Google Scholar] [CrossRef]
  61. Zhou, C.; Li, J.; Shi, M.; Wu, T. Multi-Robot Path Planning Algorithm for Collaborative Mapping under Communication Constraints. Drones 2024, 8, 493. [Google Scholar] [CrossRef]
  62. Liang, Y.; Zhao, H. An Improved Algorithm of Multi-Robot Task Assignment and Path Planning. In Intelligent Robotics; Springer: Berlin/Heidelberg, Germany, 2023; pp. 71–82. [Google Scholar]
  63. Gopee, M.A.; Prieto, S.A.; García de Soto, B. Improving Autonomous Robotic Navigation Using IFC Files. Constr. Robot. 2023, 7, 235–251. [Google Scholar] [CrossRef]
  64. Lv, Y.; Lei, J.; Yi, P. A Local Information Aggregation-Based Multiagent Reinforcement Learning for Robot Swarm Dynamic Task Allocation. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 10437–10449. [Google Scholar] [CrossRef]
  65. Shida, Y.; Jimbo, T.; Odashima, T.; Matsubara, T. Reinforcement Learning of Multi-Robot Task Allocation for Multi-Object Transportation with Infeasible Tasks. In Proceedings of the 2025 IEEE/SICE International Symposium on System Integration (SII), Munich, Germany, 21–24 January 2025. [Google Scholar] [CrossRef]
  66. Zhang, R.; Ma, Q.; Zhang, X.; Xu, X.; Liu, D. A Distributed Actor-Critic Learning Approach for Affine Formation Control of Multi-Robots With Unknown Dynamics. Int. J. Adapt. Control Signal Process 2025, 39, 803–817. [Google Scholar] [CrossRef]
  67. Chakraa, H.; Leclercq, E.; Guérin, F.; Lefebvre, D. Integrating Collision Avoidance Strategies into Multi-Robot Task Allocation for Inspection. Trans. Inst. Meas. Control 2025, 47, 1466–1477. [Google Scholar] [CrossRef]
  68. Miele, A.; Lippi, M.; Gasparri, A. A Distributed Framework for Integrated Task Allocation and Safe Coordination in Networked Multi-Robot Systems. IEEE Trans. Autom. Sci. Eng. 2025, 22, 11219–11238. [Google Scholar] [CrossRef]
  69. Kumar, S.; Sikander, A. A Novel Hybrid Framework for Single and Multi-Robot Path Planning in a Complex Industrial Environment. J. Intell. Manuf. 2024, 35, 587–612. [Google Scholar] [CrossRef]
  70. Teng, Y.; Feng, T.; Li, J.; Chen, S.; Tang, X. A Dual-Layer Symmetric Multi-Robot Path Planning System Based on an Improved Neural Network-DWA Algorithm. Symmetry 2025, 17, 85. [Google Scholar] [CrossRef]
  71. Fareh, R.; Baziyad, M.; Rabie, T.F.; Khadraoui, S.; Rahman, M.H. Efficient Path Planning and Formation Control in Multi-Robot Systems: A Neural Fields and Auto-Switching Mechanism Approach. IEEE Access 2025, 13, 8270–8285. [Google Scholar] [CrossRef]
  72. Qiu, H.; Yu, W.; Zhang, G.; Xia, X.; Yao, K. Multi-Robot Collaborative 3D Path Planning Based On Game Theory and Particle Swarm Optimization Hybrid Method. J. Supercomput. 2025, 81, 487. [Google Scholar] [CrossRef]
  73. Li, W.; Ma, Z.; Yu, Y. Proactive Multi-Robot Path Planning via Monte Carlo Congestion Prediction in Intralogistics. IEEE Robot. Autom. Lett. 2025, 10, 4588–4595. [Google Scholar] [CrossRef]
  74. Thangavelu, V.; Napp, N. Design and Simulation of a Multi-Robot Architecture for Large-Scale Construction Projects. In Proceedings of the 2021 International Symposium on Multi-Robot and Multi-Agent Systems (MRS), Cambridge, UK, 4 November 2021; pp. 181–189. [Google Scholar]
  75. Aryan, A.; Modi, M.; Saha, I.; Majumdar, R.; Mohalik, S. Integrated Task and Path Planning for Collaborative Multi-Robot Systems. In Proceedings of the ACM/IEEE 16th International Conference on Cyber-Physical Systems (with CPS-IoT Week 2025), Irvine, CA, USA, 6 May 2025; ACM: New York, NY, USA; pp. 1–12. [Google Scholar] [CrossRef]
  76. Wang, Z.; Lyu, X.; Zhang, J.; Wang, P.; Zhong, Y.; Shi, L. MAC-Planner: A Novel Task Allocation and Path Planning Framework for Multi-Robot Online Coverage Processes. IEEE Robot. Autom. Lett. 2025, 10, 4404–4411. [Google Scholar] [CrossRef]
  77. Li, Z.; Shi, N.; Zhao, L.; Zhang, M. Deep Reinforcement Learning Path Planning and Task Allocation for Multi-Robot Collaboration. Alex. Eng. J. 2024, 109, 408–423. [Google Scholar] [CrossRef]
  78. de Castro, G.G.R.; Santos, T.M.B.; Andrade, F.A.A.; Lima, J.; Haddad, D.B.; de Honório, L.M.; Pinto, M.F. Heterogeneous Multi-Robot Collaboration for Coverage Path Planning in Partially Known Dynamic Environments. Machines 2024, 12, 200. [Google Scholar] [CrossRef]
  79. Jathunga, T.; Rajapaksha, S. Improved Path Planning for Multi-Robot Systems Using a Hybrid Probabilistic Roadmap and Genetic Algorithm Approach. J. Robot. Control 2025, 6, 715–733. [Google Scholar] [CrossRef]
  80. Luo, R.; Ni, W.; Tian, H.; Cheng, J. Federated Deep Reinforcement Learning for RIS-Assisted Indoor Multi-Robot Communication Systems. IEEE Trans. Veh. Technol. 2022, 71, 12321–12326. [Google Scholar] [CrossRef]
  81. Matos, D.M.; Costa, P.; Sobreira, H.; Valente, A.; Lima, J. Efficient Multi-Robot Path Planning in Real Environments: A Centralized Coordination System. Int. J. Intell. Robot. Appl. 2025, 9, 217–244. [Google Scholar] [CrossRef]
  82. Arpitha Shankar, S.I.; Shivakumar, M. Sensor Fusion Based Multiple Robot Navigation in an Indoor Environment. Int. J. Interact. Des. Manuf. 2024, 18, 4841–4852. [Google Scholar] [CrossRef]
  83. Divya Vani, G.; Karumuri, S.R.; Chinnaiah, M.C. Hardware Schemes for Autonomous Navigation of Cooperative-Type Multi-Robot in Indoor Environment. J. Inst. Eng. Ser. B 2022, 103, 449–460. [Google Scholar] [CrossRef]
  84. Ravankar, A.; Ravankar, A.; Kobayashi, Y.; Emaru, T. Symbiotic Navigation in Multi-Robot Systems with Remote Obstacle Knowledge Sharing. Sensors 2017, 17, 1581. [Google Scholar] [CrossRef]
  85. Cid, A.; Vangasse, A.; Campos, S.; Delunardo, M.; Cruz Júnior, G.; Neto, N.; Pimenta, L.; Domingues, J.; Barros, L.; Azpúrua, H.; et al. Wireless Communication-Aware Path Planning and Multiple Robot Navigation Strategies for Assisted Inspections. J. Intell. Robot. Syst. 2024, 110, 88. [Google Scholar] [CrossRef]
  86. Ravankar, A.; Ravankar, A.; Kobayashi, Y.; Emaru, T. Hitchhiking Robots: A Collaborative Approach for Efficient Multi-Robot Navigation in Indoor Environments. Sensors 2017, 17, 1878. [Google Scholar] [CrossRef]
  87. Cai, Z.; Liu, J.; Chi, W.; Zhang, B. A Low-Cost and Robust Multi-Sensor Data Fusion Scheme for Heterogeneous Multi-Robot Cooperative Positioning in Indoor Environments. Remote Sens. 2023, 15, 5584. [Google Scholar] [CrossRef]
  88. Luo, Q.; Yang, K.; Yan, X.; Liu, C. A Multi-Robot Cooperative Localization Method Based On Optimal Weighted Particle Filtering. In Proceedings of the 2022 Global Reliability and Prognostics and Health Management (PHM-Yantai), Yantai, China, 13 October 2022; pp. 1–5. [Google Scholar] [CrossRef]
  89. Zhou, Z.; Tang, W.; Wang, Z.; Wang, L.; Zhang, R. Multi-Robot Real-Time Cooperative Localization Based on High-Speed Feature Detection and Two-Stage Filtering. In Proceedings of the 2021 IEEE International Conference on Real-time Computing and Robotics (RCAR), Xining, China, 15 July 2021; pp. 690–696. [Google Scholar] [CrossRef]
  90. Tian, C.; Hao, N.; He, F. Multi-Robot Cooperative Localization Using Anonymous Relative-Bearing Measurements. In Proceedings of the 2022 41st Chinese Control Conference (CCC), Hefei, China, 25 July 2022; pp. 3162–3167. [Google Scholar] [CrossRef]
  91. Zhu, Z.; Zhu, K.; Zheng, Z.; Chen, S.; Zheng, N. Multi-L: A Novel Multi-Robot Cooperative Localization Method in Indoor Environment. In Proceedings of the 2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC), Macau, China, 8 October 2022; pp. 2436–2443. [Google Scholar] [CrossRef]
  92. Khnissi, K.; Jabeur, C.B.; Seddik, H. Implementation of a New-Optimized ROS-Based SLAM for Mobile Robot. In Proceedings of the 2022 IEEE Information Technologies & Smart Industrial Systems (ITSIS), Paris, France, 15 July 2022; pp. 1–6. [Google Scholar] [CrossRef]
  93. Li, X.; Wang, Z.; Tan, Y.; Zhang, X. A Centralized Cooperative SLAM System for Improving Positioning and Perception Accuracy in GPS-Denied Environments. In Proceedings of the 2023 IEEE International Conference on Unmanned Systems (ICUS), Hefei, China, 13 October 2023; pp. 812–817. [Google Scholar] [CrossRef]
  94. Liu, E.; Li, H.; Li, S.; Cheng, X. Centralized Multi-Robot Collaborative LiDAR SLAM Utilizing Loop Closure Selection. In Proceedings of the 2023 China Automation Congress (CAC), Chongqing, China, 17 November 2023; pp. 463–468. [Google Scholar] [CrossRef]
  95. Ahmed Jalil, B.; Kasim Ibraheem, I. Multi-Robot SLAM Using Fast LiDAR Odometry and Mapping. Designs 2023, 7, 110. [Google Scholar] [CrossRef]
  96. Lajoie, P.-Y.; Beltrame, G. Swarm-SLAM: Sparse Decentralized Collaborative Simultaneous Localization and Mapping Framework for Multi-Robot Systems. IEEE Robot. Autom. Lett. 2024, 9, 475–482. [Google Scholar] [CrossRef]
  97. Xia, Y.; Wu, X.; Ma, T.; Zhu, L.; Cheng, J.; Zhu, J. Multi-Robot Collaborative Mapping with Integrated Point-Line Features for Visual SLAM. Sensors 2024, 24, 5743. [Google Scholar] [CrossRef]
  98. Choi, Y.-W.; Choi, J.-W.; Im, S.-G.; Qian, D.; Lee, S.-G. Multi-Robot Avoidance Control Based on Omni-Directional Visual SLAM with a Fisheye Lens Camera. Int. J. Precis. Eng. Manuf. 2018, 19, 1467–1476. [Google Scholar] [CrossRef]
  99. Chen, L.; Wang, Y.; Miao, Z.; Feng, M.; Zhou, Z.; Wang, H.; Wang, D. Toward Safe Distributed Multi-Robot Navigation Coupled With Variational Bayesian Model. IEEE Trans. Autom. Sci. Eng. 2024, 21, 7583–7598. [Google Scholar] [CrossRef]
  100. Mohammed, S.S.M.; Wahab, N.A.; Mahmud, M.S.A.; Alqaraghuli, H.; Samsuria, E.; Romdlony, M.Z. Efficient Autonomous Navigation in Dynamic Environments: Algorithm Evaluation and Multi-Robot Coordination. In Proceedings of the 2024 IEEE International Conference on Automatic Control and Intelligent Systems (I2CACIS), Shah Alam, Malaysia, 29 June 2024; pp. 433–438. [Google Scholar] [CrossRef]
  101. Basha, M.; Siva Kumar, M.; Chinnaiah, M.C.; Lam, S.-K.; Srikanthan, T.; Janardhan, N.; Hari Krishna, D.; Dubey, S. A Versatile Approach to Polygonal Object Avoidance in Indoor Environments with Hardware Schemes Using an FPGA-Based Multi-Robot. Sensors 2023, 23, 9480. [Google Scholar] [CrossRef]
  102. Zahroof, R.; Liu, J.; Zhou, L.; Kumar, V. Multi-Robot Localization and Target Tracking with Connectivity Maintenance and Collision Avoidance. In Proceedings of the 2023 American Control Conference (ACC), San Diego, CA, USA, 31 May 2023; pp. 1331–1338. [Google Scholar]
  103. Matsuda, T.; Kuroda, Y.; Fukatsu, R.; Karasawa, T.; Takasago, M.; Morishita, K. A Mutual Positioning Relay Method of Multiple Robots for Monitoring Indoor Environments. Int. J. Adv. Robot. Syst. 2022, 19, 172988062211298. [Google Scholar] [CrossRef]
  104. Chen, A.; Zhang, B.; Cai, H.; Wei, L.; Liao, Y.; Zhou, B. Experimental Study on Multi-Robot 3D Source Localization in Indoor Environments with Weak Airflow. E3S Web Conf. 2022, 356, 04008. [Google Scholar] [CrossRef]
  105. Shi, Y.; Hao, C.; Wang, Y.; Liu, D.; Guo, J. Multi-Robot Real-Time Collaborative SLAM System Based on 5G MEC Framework. In Proceedings of the 2024 36th Chinese Control and Decision Conference (CCDC), Xi’an, China, 25 May 2024; pp. 5590–5595. [Google Scholar] [CrossRef]
  106. Chang, Y.; Ebadi, K.; Denniston, C.E.; Ginting, M.F.; Rosinol, A.; Reinke, A.; Palieri, M.; Shi, J.; Chatterjee, A.; Morrell, B.; et al. LAMP 2.0: A Robust Multi-Robot SLAM System for Operation in Challenging Large-Scale Underground Environments. IEEE Robot. Autom. Lett. 2022, 7, 9175–9182. [Google Scholar] [CrossRef]
107. Liu, W. SLAM Algorithm for Multi-Robot Communication in Unknown Environment Based on Particle Filter. J. Ambient Intell. Humaniz. Comput. 2021, 1–9. [Google Scholar] [CrossRef]
  108. Song, X.; Chen, K.; Bi, Z.; Niu, Q.; Liu, J.; Peng, B.; Zhang, S.; Liu, M.; Li, M.; Pan, X. Mastering Reinforcement Learning: Foundations, Algorithms, and Real-World Applications. SSRN 2024. [Google Scholar] [CrossRef]
  109. Ogunsina, M.; Efunniyi, C.P.; Osundare, O.S.; Folorunsho, S.O.; Akwawa, L.A. Reinforcement Learning in Autonomous Navigation: Overcoming Challenges in Dynamic and Unstructured Environments. Eng. Sci. Technol. J. 2024, 5, 2724–2736. [Google Scholar] [CrossRef]
  110. Xi, Y. Research on Autonomous Mobile Robot Navigation Technology Based on Deep Reinforcement Learning. Highlights Sci. Eng. Technol. 2024, 114, 108–113. [Google Scholar] [CrossRef]
  111. Cai, W.; Huang, L.; Zou, Z. An Integrated Approach Combining Virtual Environments and Reinforcement Learning to Train Construction Robots for Conducting Tasks Under Uncertainties. In Proceedings of the Canadian Society of Civil Engineering Annual Conference 2022, Whistler, BC, Canada, 25–28 May 2022; pp. 259–271. [Google Scholar] [CrossRef]
  112. Zhang, Y.; Zeng, J.; Sun, H.; Sun, H.; Hashimoto, K. Dual-Layer Reinforcement Learning for Quadruped Robot Locomotion and Speed Control in Complex Environments. Appl. Sci. 2024, 14, 8697. [Google Scholar] [CrossRef]
  113. Chen, S.-C.; Pamungkas, R.S.; Schmidt, D. The Role of Machine Learning in Improving Robotic Perception and Decision Making. Int. Trans. Artif. Intell. 2024, 3, 32–43. [Google Scholar] [CrossRef]
  114. Bai, Z.; Pang, H.; He, Z.; Zhao, B.; Wang, T. Path Planning of Autonomous Mobile Robot in Comprehensive Unknown Environment Using Deep Reinforcement Learning. IEEE Internet Things J. 2024, 11, 22153–22166. [Google Scholar] [CrossRef]
  115. Takzare, N.; Lademakhi, N.Y.; Korayem, M.H. Path Planning of Mobile Robot Based on Reinforcement Learning to Reach Faster Training. In Proceedings of the 2024 12th RSI International Conference on Robotics and Mechatronics (ICRoM), Tehran, Iran, 17 December 2024; pp. 431–436. [Google Scholar] [CrossRef]
  116. Jing, Y.; Weiya, L. RL-QPSO Net: Deep Reinforcement Learning-Enhanced QPSO for Efficient Mobile Robot Path Planning. Front. Neurorobot. 2025, 18. [Google Scholar] [CrossRef]
  117. Bingol, M.C. A Safe Navigation Algorithm for Differential-Drive Mobile Robots by Using Fuzzy Logic Reward Function-Based Deep Reinforcement Learning. Electronics 2025, 14, 1593. [Google Scholar] [CrossRef]
  118. Lu, Z.; He, L.; Wang, H.; Yuan, L.; Xiao, W.; Liu, Z.; Chen, Y. CMADRL: Cross-Modal Attention Based Deep Reinforcement Learning for Mobile Robot’s Obstacle Avoidance. Meas. Sci. Technol. 2025, 36, 036306. [Google Scholar] [CrossRef]
  119. Yu, Z.; Hou, Y.; Zhang, Q.; Liu, Q. Safety-Guided Deep Reinforcement Learning for Path Planning of Autonomous Mobile Robots. In Proceedings of the 2024 International Joint Conference on Neural Networks (IJCNN), Yokohama, Japan, 30 June 2024; pp. 1–6. [Google Scholar] [CrossRef]
  120. Song, H.; Li, A.; Wang, T.; Wang, M. Multimodal Deep Reinforcement Learning with Auxiliary Task for Obstacle Avoidance of Indoor Mobile Robot. Sensors 2021, 21, 1363. [Google Scholar] [CrossRef]
  121. Tang, W.; Wu, F.; Lin, S.; Ding, Z.; Liu, J.; Liu, Y.; He, J. Causal Deconfounding Deep Reinforcement Learning for Mobile Robot Motion Planning. Knowl. Based Syst. 2024, 303, 112406. [Google Scholar] [CrossRef]
  122. Zhao, H.; Guo, Y.; Li, X.; Liu, Y.; Jin, J. Hierarchical Control Framework for Path Planning of Mobile Robots in Dynamic Environments Through Global Guidance and Reinforcement Learning. IEEE Internet Things J. 2024, 12, 309–333. [Google Scholar] [CrossRef]
  123. Chen, G.; Pan, L.; Chen, Y.; Xu, P.; Wang, Z.; Wu, P.; Ji, J.; Chen, X. Deep Reinforcement Learning of Map-Based Obstacle Avoidance for Mobile Robot Navigation. SN Comput. Sci. 2021, 2, 417. [Google Scholar] [CrossRef]
  124. Mahandule, V.; Patil, H.; Thombare, V.; Wagh, B.; More, M.; Gaykar, A. A Comprehensive Framework for Human-Robot Collaboration in Industrial Environments. Int. J. Adv. Res. Sci. Commun. Technol. 2024, 4, 289–295. [Google Scholar] [CrossRef]
  125. Wang, X. Mobile Robot Environment Perception System Based on Multimodal Sensor Fusion. Appl. Comput. Eng. 2025, 127, 42–49. [Google Scholar] [CrossRef]
126. Khattak, S.; Nguyen, H.; Mascarich, F.; Dang, T.; Alexis, K. Complementary Multi-Modal Sensor Fusion for Resilient Robot Pose Estimation in Subterranean Environments. In Proceedings of the 2020 International Conference on Unmanned Aircraft Systems (ICUAS), Athens, Greece, 1–4 September 2020; pp. 1024–1029. [Google Scholar] [CrossRef]
  127. Nguyen Canh, T.; Son Nguyen, T.; Hoang Quach, C.; HoangVan, X.; Duong Phung, M. Multisensor Data Fusion for Reliable Obstacle Avoidance. In Proceedings of the 2022 11th International Conference on Control, Automation and Information Sciences (ICCAIS), Hanoi, Vietnam, 21 November 2022; pp. 385–390. [Google Scholar] [CrossRef]
  128. Sumalatha, I.; Chaturvedi, P.; Gowtham, R.R.; Patil, S.; Thethi, H.P.; Hameed, A.A. Autonomous Multi-Sensor Fusion Techniques for Environmental Perception in Self-Driving Vehicles. In Proceedings of the 2024 International Conference on Communication, Computer Sciences and Engineering (IC3SE), Gautam Buddha Nagar, India, 9 May 2024; pp. 1146–1151. [Google Scholar] [CrossRef]
  129. Mees, O.; Eitel, A.; Burgard, W. Choosing Smartly: Adaptive Multimodal Fusion for Object Detection in Changing Environments. In Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Daejeon, Republic of Korea, 9–14 October 2016; pp. 151–156. [Google Scholar] [CrossRef]
  130. Xie, S.; Gong, L.; Chen, Z.; Chen, B. Simulation of Real-Time Collision-Free Path Planning Method with Deep Policy Network in Human-Robot Interaction Scenario. In Proceedings of the 2023 International Conference on Advanced Robotics and Mechatronics (ICARM), Sanya, China, 8 July 2023; pp. 360–365. [Google Scholar] [CrossRef]
  131. Zhu, A.; Yang, S.X. A Framework for Coordination and Navigation of Multi-Robot Systems. In Proceedings of the 2010 IEEE International Conference on Automation and Logistics, Hong Kong, China, 16–20 August 2010; pp. 350–355. [Google Scholar] [CrossRef]
  132. Chen, Y.; Rosolia, U.; Ames, A.D. Decentralized Task and Path Planning for Multi-Robot Systems. IEEE Robot. Autom. Lett. 2021, 6, 4337–4344. [Google Scholar] [CrossRef]
  133. Xu, Z.; Zhan, X.; Xiu, Y.; Suzuki, C.; Shimada, K. Onboard Dynamic-Object Detection and Tracking for Autonomous Robot Navigation with RGB-D Camera. IEEE Robot. Autom. Lett. 2023, 9, 651–658. [Google Scholar] [CrossRef]
  134. Costin, A.; McNair, J. IoT and Edge Computing in the Construction Site. In Buildings and Semantics; CRC Press: London, UK, 2022; pp. 223–237. [Google Scholar]
  135. Goel, A.; Tung, C.; Lu, Y.-H.; Thiruvathukal, G.K. A Survey of Methods for Low-Power Deep Learning and Computer Vision. In Proceedings of the 2020 IEEE 6th World Forum on Internet of Things (WF-IoT), New Orleans, LA, USA, 2–16 June 2020; pp. 1–6. [Google Scholar] [CrossRef]
  136. Sudevan, V.; Zayer, F.; Javed, S.; Karki, H.; De Masi, G.; Dias, J. Hybrid-Neuromorphic Approach for Underwater Robotics Applications: A Conceptual Framework. arXiv 2024. [Google Scholar] [CrossRef]
  137. Park, S.; Kim, H.; Jeon, W.; Yang, J.; Jeon, B.; Oh, Y.; Choi, J. Quantization-Aware Imitation-Learning for Resource-Efficient Robotic Control. arXiv 2024. [Google Scholar] [CrossRef]
138. Hanzal, S.; Tvrda, L.; Harvey, M. An Investigation into Discomfort and Fatigue Related to the Wearing of an EEG Neurofeedback Headset. medRxiv 2023. [Google Scholar] [CrossRef]
  139. Cheng, B.; Fan, C.; Fu, H.; Huang, J.; Chen, H.; Luo, X. Measuring and Computing Cognitive Statuses of Construction Workers Based on Electroencephalogram: A Critical Review. IEEE Trans. Comput. Soc. Syst. 2022, 9, 1644–1659. [Google Scholar] [CrossRef]
  140. Kot, T.; Bajak, J.; Novak, P. Analysis and Prevention of Selected Risks of Remotely and Autonomously Controlled Mobile Robot TeleRescuer. In Proceedings of the 2017 18th International Carpathian Control Conference (ICCC), Sinaia, Romania, 28–31 May 2017; pp. 551–554. [Google Scholar] [CrossRef]
  141. Ambroszkiewicz, S.; Bartyna, W.; Skarzynski, K.; Stepniak, M. Fault Tolerant Automated Task Execution in a Multi-Robot System. In Intelligent Distributed Computing IX; Springer: Berlin/Heidelberg, Germany, 2016; pp. 101–107. [Google Scholar] [CrossRef]
Figure 1. Systematic process for the literature search and article inclusion.
Figure 2. Details of publications: (a) distribution by region, (b) publications by year, and (c) publication type.
Figure 3. Overview of key dimensions and methods of HRC and MRC in indoor environments.
Figure 4. Human–robot interaction and teleoperation strategies.
Figure 5. Directions for future research to enhance HRC and MRC deployment in indoor construction environments.
Table 1. Keywords for database search.
Category 1 keywords: “Human-robot collaboration” or “Human-robot interaction” or “Human-robot teaming” or “Worker-robot interaction” or “Worker-robot collaboration” or “Multi-robot collaboration” or “Multi-robot teaming” or “Multiple robotic agents”
Category 2 keywords: “Indoor environments” or “Indoor manufacturing” or “Indoor” or “Indoor built environments” or “Indoor industrial environments”
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.


MDPI and ACS Style

Duorinaah, F.X.; Rajendran, M.; Kim, T.W.; Kim, J.I.; Lee, S.; Lee, S.; Kim, M.-K. Human and Multi-Robot Collaboration in Indoor Environments: A Review of Methods and Application Potential for Indoor Construction Sites. Buildings 2025, 15, 2794. https://doi.org/10.3390/buildings15152794



