Article

Enhancing Robot Inclusivity in the Built Environment: A Digital Twin-Assisted Assessment of Design Guideline Compliance

Engineering Product Development Pillar, Singapore University of Technology and Design, Singapore 487372, Singapore
*
Author to whom correspondence should be addressed.
Buildings 2024, 14(5), 1193; https://doi.org/10.3390/buildings14051193
Submission received: 9 March 2024 / Revised: 11 April 2024 / Accepted: 12 April 2024 / Published: 23 April 2024
(This article belongs to the Section Construction Management, and Computers & Digitization)

Abstract

Developing guidelines for designing robot-inclusive spaces has been challenging and resource-intensive, primarily relying on physical experiments and observations of robot interactions within the built environment. These conventional methods are often costly, time-consuming, and labour-intensive, demanding manual intervention. To address these limitations, this study explores the potential of using digital twins as a promising solution to offer detailed insights, reducing the dependence on physical experiments for studying robot-built environment interactions. Although the concept of digital twins is popular in many domains, the use of digital twins for this specific problem has not been explored yet. A novel methodology for assessing existing built environment guidelines by incorporating them as an architectural digital twin asset within robot simulation software is proposed in this regard. By analysing the digital interactions between robots and the architectural digital twin assets in simulations, the compatibility of the environment with robots is evaluated, ultimately contributing to enhancing these guidelines to be robot-inclusive. The ultimate goal is to create environments that are not only inclusive but also readily accessible to Autonomous Mobile Robots (AMRs). With this objective, the proposed methodology is tested on robots of different specifications to understand the robots' interactions with different architectural digital twin environments and obstacles. The digital twin effectively demonstrates the capability of the proposed approach in assessing the robots' suitability for deployment in the simulated environments. The insights gained contribute to improved comprehension and strengthen the existing design guidelines.

1. Introduction

In today's rapidly evolving world, robots and humans are increasingly interacting within built environments. Advancements in robotics technology enable robots to participate in diverse tasks, ranging from basic functions to complex roles. This collaborative coexistence is reshaping various activities, leading to a future where human-robot partnerships are common and transformative. Autonomous Mobile Robots (AMRs) offer adaptability, cost-effectiveness, and scalability for various industries. Shipments of AMRs have increased significantly, and experts predict further growth in the coming years [1]. However, realising fully autonomous multi-purpose service robots operating in human-related environments remains a distant goal [2]. Spatial limitations in the built environment constrain the performance capabilities of robots. Given the growing presence of AMRs globally, it is crucial to understand their interactions with humans and their operating environments. Research shows that the environment can impact a robot's usability and effectiveness, beyond navigation challenges [3,4,5]. Studies examine various aspects of human-robot interaction, providing valuable insights for designing robot-inclusive environments [3,4]. The process involves extensive experiments and analyses to identify factors influencing their collaboration. This research aims to explore an innovative approach using simulations that incorporate various virtual robot models within an architectural digital twin. The twin is directly modelled based on the current built environment design guidelines. Our primary objective is to establish the efficacy of these design guidelines in simulated scenarios. This research endeavours to outline a systematic methodology for elevating current design guidelines into standards that are inclusive of robots, achieved through the application of digital twin simulations. The proposed approach can also be applied to test existing structures for their robot friendliness. This approach enhances robot integration and performance in physical environments, improving task efficiency and effectiveness in various settings.
The paper’s main contributions are summarized as follows:
  • A novel systematic methodology that utilises digital twins to establish the efficacy of built environment design guidelines for AMRs.
  • Modelling robots and environments as digital twin assets and experimentally demonstrating the robots' interactions with these environments.
The subsequent sections of the article are organized as follows: Section 2 provides an overview of relevant literature, discussing methods used to establish robot-inclusive design guidelines, current approaches for creating robot-inclusive environments, and state-of-the-art techniques. Section 3 introduces the proposed methodology, outlining its key steps. Section 4 presents a case study to illustrate the application of the methodology, including the development of an architectural digital twin, robot simulation, and derivation of design guidelines. The results are summarized, and study limitations are highlighted in Section 5. Lastly, Section 6 concludes the article, suggesting potential opportunities for future research.

2. Literature Review

This section covers research methods for establishing robot-inclusive design guidelines, current approaches for creating robot-inclusive environments, and state-of-the-art technologies in constructing architecture digital twins during pre- and post-construction phases.

2.1. Research Methods for Robot Inclusive Design Guidelines

This section includes prior research on spatial challenges for robots, particularly AMRs. These studies contribute to establishing design guidelines and principles for robot inclusivity. Farkas et al. [3] conducted a comprehensive study on mobile robot inhabitation in built environments. They focused on operational reliability, safety, and guidelines for creating robot-friendly spaces. The study compared international standards for the built environment with robotics standards and identified faults caused by unsuitable environments. The authors introduced risk prevention guidelines, emphasizing accessibility requirements, and considering ergonomic and anthropometric factors for a safe setting. They highlighted the need for new design recommendations for collaborative environments and proposed checklists and a design process to promote effective human-robot cooperation in a reliable robot-friendly setting.
Elara et al. [4] studied designing environments for autonomous service robots, using a Roomba for experimentation. They propose four design principles: observability (improving sensors), accessibility (safe navigation and robot reach), activity (robot–human and robot–object interactions), and safety (protecting humans and robots). The experiment confirms that following these principles enhances Roomba’s performance in such environments. Elara et al. [2] aim to promote collaboration between roboticists, architects, and designers to create social spaces that accommodate robots effectively. They propose the Robot Inclusive Spaces (RIS) challenge, focusing on real-world deployment issues. Using a “design for robots” strategy, the RIS challenge provides design principles and evaluates cleaning robots’ performance based on these principles. The study suggests conducting multiple tests to assess performance and recommends a comprehensive multi-test approach for future evaluations. The research encourages the development of inclusive spaces to facilitate successful human-robot interactions.
The paper by Tan et al. [5] emphasizes the importance of considering environments and robots together in intelligent living spaces. They propose a framework with three components: Robot-inclusiveness, Taxonomy, and Design criteria. The main finding stresses the need to incorporate the relationship between robots, tasks, and environments in design for inclusive robot environments. This holistic approach enhances functionality and user experience. The framework offers valuable insights for seamless human-robot interactions in intelligent living environments. The mentioned literature studies use robust physical experiments and reasoning (inductive and deductive) to study robot-space interactions and derive design guidelines for robot-inclusive spaces.
Yeo et al. [6] introduced the 'Design for Robot' (DfR) approach, integrating architectural changes to enhance robot productivity. It employs deductive and inductive methods to yield robot-inclusive design guidelines, promising enhanced efficiency. Farkas et al. [7] proposed the Robot Compatible Environments (RCE) model, emphasizing a compatibility checklist and integration into Building Information Models (BIM). In [8], the authors explored the design of robot ergonomic environments with a framework for human–robot interaction (HRI), aiming to provide guidelines for designers in creating robot ergonomic spaces.
However, these efforts to define robot-inclusive design guidelines are resource-intensive, primarily relying on physical experiments and observations of robot interactions within the built environment. These conventional methods are often costly, labour-intensive, and time-consuming; they lack digitization and demand manual intervention, hindering further analysis. This underscores the need for professionals to explore innovative solutions, particularly leveraging emerging technologies such as simulation software and advanced analytics, to streamline the process and foster more robot-inclusive design standards. Digital tools can offer benefits for future research in this field.

2.2. Current Research Approaches for Robot-Inclusive Environments

This section covers various research efforts in developing robot-inclusive environments. These include studying the impact of robots on architecture and human experience, exploring new spatial configurations, creating frameworks for quantifying inclusiveness, and designing spaces using robots. These approaches contribute to creating environments that effectively integrate and interact with robots, promoting enhanced human-robot collaboration and seamless integration of robotic technology in different settings. The Gensler Research Institute’s study “Excuse Me Robot” [9] examines how AMRs influence architecture and human experience. Their goal is to develop guiding principles for designing spaces accommodating AMRs, considering physical, technological, and psychological aspects. These principles enhance human experience and stress universal design for safe and accessible spaces. The study proposes a conversation guide and data collection tool to effectively incorporate AMRs, acknowledging their impact on the built environment and future cities.
The book “Automated Landscapes” [10] explores automation’s impact on spatial configurations and the built environment. It discusses three projects involving human-robot collaboration and AI integration in production lines. The studies show how automation affects workplaces, with examples of robots handling high-volume tasks while humans focus on complex ones, leading to optimized spatial organization and increased productivity. Naraharisetti et al. [11] propose a framework with two indices: Robot-Inclusive Space Index (RSI) and Robot Complexity Index (RCI). RSI measures space inclusiveness for robots, while RCI quantifies robot complexity. The study shows that well-designed Robot-Inclusive Spaces (RIS) enable less complex robots to achieve similar functionality as more complex ones. Creating inclusive spaces is crucial as it significantly impacts robot complexity requirements. This approach optimizes robot functionality in various environments by enhancing space inclusiveness.
Ng et al. [12] introduced an adapted Failure Mode and Effects Analysis (FMEA) for evaluating robot-inclusivity and safety in buildings. The framework identifies potential failures, assesses their effects, and recommends mitigation actions. Case studies on telepresence robots in a university campus support the methodology. The adapted FMEA offers a valuable tool for assessing and managing risks in robot-inclusive environments, promoting safer and more inclusive spaces for service robots.
Muthugala et al. [13] explored the concept of design by robot using a floor-cleaning robot to improve area coverage in workspaces. The robot perceives the workspace using LIDAR readings and generates a metric map. The Workspace Organization Suggester (WOS) optimizes object placement for maximum coverage. Experimental results show significant improvement in the floor cleaning robot’s area coverage performance when following the suggested workspace modifications. This approach is practical and effective in enhancing the robot’s ability to cover a larger area in workspaces.
However, the above-mentioned literature still poses limitations in the aspects of data collection and trial testing. In [12], identification, labelling, and counting of the hazard classes is not automated and is labour-intensive. The data collation could be subjective as the user may miss out on certain hazards. Additionally, the hazard classes are limited to specific indoor objects, mainly focusing on indoor environments only. The framework also does not consider different robot designs and footprints, which would interact differently within the same environment due to spatial constraints. Nor does it consider changing environmental elements such as different lighting conditions and dynamic obstacles, which are also essential aspects in assessing robot inclusivity.
The paper [14] outlines the integration of urodela robots into vertical green gardens through Robot-Inclusive Modular Green Landscaping, utilizing rail tracks and plant pot arrays. In [15], recommendations are provided for accommodating service robots in environments like hotels and restaurants, considering ownership by both organizations and customers. However, this research only offers a general overview of essential factors in hospitality establishments. In [16], the authors explored adapting gardening and gardens to suit a robot lawn mower, providing a reference for current research approaches to Robot-Inclusive Environments. Additionally, in [17], a passive alert tactile system was proposed to indicate potential hazards in the vicinity to the robot, thereby enhancing robot safety and inclusivity.

2.3. Advanced Technologies in Environmental Modelling

Conventional problem solving involves real-world tests, but this approach has limitations. Real-life testing may not always be feasible and can disrupt space functioning. As an alternative, digital twins are valuable, offering state-of-the-art techniques that do not rely solely on real-life testing.

2.3.1. Digital Twin

Digital twins have become prevalent in the Architecture, Engineering, Construction, and Facility Management (AEC-FM) industries [18,19]. They serve as models to capture and analyse information about various processes and environments [20,21,22]. They effectively address complex problems and find application in various contexts, including AMRs [23,24].
Creating digital replicas of AMRs and their environments makes virtual testing of control and navigation algorithms possible. Simulations and analyses in the digital twin enable evaluating system behaviour in different scenarios. This technology provides a standardized, customizable solution, eliminating real-life testing constraints and offering a controlled environment to test and analyse AMR system performance [23].

2.3.2. Integration of BIM with ARS

The integration of Building Information Modelling (BIM) with Autonomous Robot Systems (ARS) enriches digital design and construction technologies with robotic automation capabilities. Extensive research is being conducted in this area, including extracting maps from BIM for AMR navigation [25,26], using BIM for robotic applications in construction logistics [27], and creating digital twins for robot navigation [28] and automation [29,30]. Research also focuses on creating robot simulation environments from BIM [27,31,32] and emphasizes the need for an interface to use BIM with ARS [33].

2.3.3. 3D Digital Reconstruction Techniques

When BIM is unavailable, various techniques are used to digitize the space. Some research employs 3D scanning and photogrammetry for reconstruction, creating digital twins of buildings [34,35,36,37]. Other techniques involve augmented reality, machine learning, and image-based 3D reconstructions [38,39,40]. Applications like planner5D, Magiscan, Polycam, iRhino 3D, and ARkit-Roomplan enable real-time 3D reconstructions with parametric representation.

2.3.4. 3D Object Detections

Lidar sensors and RGB-D cameras are commonly used to obtain 3D data [41,42,43], which is used to train computer vision systems for object recognition in various environments. This technology enhances the autonomy of robots and self-driving cars. Studies include PointNet++ [44], VoteNet [45], and 3DETR [46,47] for 3D object detection. Real-time 3D object detection, exemplified by MediaPipe Objectron [48,49], is an active area of research.
This study underscores the power of digital twin technology in creating virtual replicas of physical environments for real-time monitoring and simulation analysis. Integrating BIM with ARS enhances a robot’s visualization and decision making, bridging the virtual–physical gap. Additionally, 3D digital reconstruction techniques accurately capture environmental features, enabling detailed analysis and simulation. Utilizing 3D object detection enhances modelling precision by identifying relevant objects. Overall, these advanced technologies offer unprecedented opportunities for environmental modelling to study interactions between robots and the built environment.

3. Proposed Methodology

The methodology proposed for enhancing and expanding design guidelines to accommodate robots involves digital modelling of architectural elements as defined in the standards or digitizing the existing built environment to assess its suitability for robots.
In this study, the architectural digital twin assets are modelled based on the Accessibility Design Guidelines [50], a code established by the Building Construction Authority (BCA) in Singapore. Singapore fosters an inclusive and barrier-free built environment addressing the diverse needs of its inhabitants. The code delineates essential requirements and offers comprehensive guidelines on accessibility and universal design. This code holds global relevance, as it exemplifies high standards in urban planning, reflects universal design principles, and contributes to collaborations in creating inclusive built environments and advancing mobility in urban environments.
The decision to focus on accessibility code as the subject of study stems from the belief that embracing universal design and creating barrier-free environments can accommodate AMRs as users. Furthermore, designing for AMRs can improve their ability to perform monotonous, dangerous, and demeaning tasks, ultimately enhancing the human experience and contributing to a better world. The goal is to analyse the robot friendliness of the environment and prepare for robot deployment by using a digital twin of the environment.
This proposed method establishes a streamlined and effective process for digitising the site, simulating robot-related hazards, and studying how robots interact with the site. This approach enhances the development of design guidelines, ultimately creating safer, more efficient, and more inclusive environments for both robots and humans. The methodology is structured into three phases: documentation, digitization, and design analysis. An overview of the proposed method is given in Figure 1.

3.1. Documentation

The first stage focuses on on-site documentation for robot simulation. Documentation methods and data types differ depending on the architectural phase. Direct data collection from designers through Building Information Modelling (BIM) is ideal early in the design process. Researchers are actively working on methods to make BIM models usable for robots [25,26], but this remains an open area of research. In the post-construction phase, when BIM data is unavailable, digitization is carried out using laser scanning or photogrammetry techniques, producing point cloud data (PCD) [34,35,36,37]. Mobile scans are preferred over stationary scans as they are faster for documentation. The collected PCD needs processing and training to enable accurate robot simulation.

3.2. Digitization

This step focuses on making the digital model suitable for Gazebo, a robot simulation software. The BIM or the newly constructed 3D models of the test sites are directly imported in Collada (DAE) format into Gazebo for robot simulation. In cases where the test sites are documented as point clouds, they are reconstructed into digital space before being imported into the simulation software. The point clouds captured during the documentation phase undergo the following steps for reconstruction; the techniques are adapted from Florent Poux [51]. The first step in data processing is subsampling, reducing the number of points in the dataset for improved computational efficiency. Techniques like random sampling or voxel-based methods preserve essential characteristics while reducing size [52,53,54]. This is followed by outlier removal, which involves identifying and eliminating points considered as noise or anomalies in the dataset [55,56,57]. Outliers can impact accuracy and reliability, and various algorithms like statistical methods or clustering techniques are used for their detection and removal, leading to improved data quality and more accurate results in analysis and simulations. The next step in the digitization process is point cloud segmentation, using the random sample consensus (RANSAC) [58] and density-based spatial clustering of applications with noise (DBSCAN) [59] algorithms. RANSAC fits geometric models to remove outliers, refining the segmentation. DBSCAN groups points based on proximity, categorising them into clusters of varying densities and shapes.
Applying both algorithms results in distinct clusters representing objects or architectural elements within the digitised space, aiding in targeted analysis and visualisation [54,60].
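As a concrete illustration of this preprocessing and segmentation pipeline, the following Python sketch uses the open-source Open3D library. It is not the exact toolchain prescribed by the methodology; the file name and all numeric parameters are placeholders chosen for illustration. The sketch subsamples a captured scan, removes statistical outliers, extracts the dominant plane with RANSAC, and clusters the remaining points with DBSCAN.
```python
# Illustrative preprocessing/segmentation sketch using Open3D (assumed available);
# "site_scan.ply" and all numeric parameters are placeholders, not values from the study.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("site_scan.ply")

# Subsampling: voxel-based downsampling keeps the overall geometry while reducing size
down = pcd.voxel_down_sample(voxel_size=0.02)

# Outlier removal: statistical filter discards points far from their local neighbourhood
clean, _ = down.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# RANSAC: fit the dominant plane (e.g., the floor) and separate it from the rest
plane_model, plane_idx = clean.segment_plane(distance_threshold=0.02,
                                             ransac_n=3,
                                             num_iterations=1000)
floor = clean.select_by_index(plane_idx)
rest = clean.select_by_index(plane_idx, invert=True)

# DBSCAN: group the remaining points into clusters of objects/architectural elements
labels = np.array(rest.cluster_dbscan(eps=0.05, min_points=30))
clusters = [rest.select_by_index(np.where(labels == k)[0])
            for k in range(labels.max() + 1)]
print(f"{len(clusters)} clusters extracted from {len(rest.points)} non-floor points")
```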
The final step is generating the clusters as 3D meshes and exporting them for use in robot simulation software. The clustered points can be directly used to generate meshes through different surface reconstruction strategies, such as the ball-pivoting algorithm [61] and Poisson reconstruction [62]. The resulting meshes accurately represent the digitised architectural elements. The meshes can be exported in formats like OBJ or Collada (DAE) for compatibility with robot simulation software, preserving geometry and topology for analysis and visualisation. Additionally, the points can also be used for voxel modelling. The voxel model divides the data into small volumetric elements called voxels [63]. Voxel modelling helps organise and analyse the data locally, enabling surface reconstruction and collision detection operations. Each voxel is assigned attributes based on the points within its volume, facilitating efficient storage and retrieval of information within the voxel grid. After voxelisation, voxel cubes are converted into 3D meshes using surface reconstruction methods and exported.
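Continuing the Open3D-based sketch above, a segmented cluster can be meshed and exported in a simulator-friendly format. The snippet below is again only illustrative: the input file name and parameters are hypothetical, and the exported OBJ mesh would subsequently be converted to Collada (DAE), for example in Blender, before import into Gazebo.
```python
# Meshing/voxelisation sketch with Open3D; "cluster_table.ply" is a hypothetical
# segmented cluster saved from the previous step, and parameters are illustrative.
import open3d as o3d

cluster = o3d.io.read_point_cloud("cluster_table.ply")
cluster.estimate_normals()  # both reconstruction methods require point normals

# Ball-pivoting surface reconstruction over a set of ball radii
radii = o3d.utility.DoubleVector([0.02, 0.04, 0.08])
mesh_bpa = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(cluster, radii)

# Poisson surface reconstruction (needs reasonably complete, well-oriented normals)
mesh_poisson, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    cluster, depth=9)

# Voxel model, e.g. for coarse volumetric analysis or collision geometry
voxel_grid = o3d.geometry.VoxelGrid.create_from_point_cloud(cluster, voxel_size=0.05)

# Export for the simulator; OBJ can be converted to Collada (DAE) with external tools
o3d.io.write_triangle_mesh("cluster_table.obj", mesh_bpa)
```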

3.3. Design and Analysis

The final stage involves designing and analysing the digital model. The digitised architectural model is used in the robot simulation software Gazebo (version 11) to test various robot behaviours and interactions within the environment. Virtual scenarios are modelled based on the existing built environment design guidelines to assess robot navigation, path planning, and interaction with architectural elements. The digital model provides a realistic representation for accurately evaluating robot deployment and performance. The developed digital twin facilitates the enhancement of design guidelines. The methodology proposed to enhance design guidelines involves conducting and analysing a series of robot–built environment interactions in simulations. Each simulation focuses on a specific scenario developed based on the guidelines and involves a different type of robot with varying parameters. For instance, one could design a study to examine how robots navigate and interact in smart homes, office spaces or urban environments, ensuring that the experiments adhere to established guidelines. The ultimate goal is to validate and refine existing standards, making them more accessible for seamless robot deployments in diverse digital environments.
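For reference, a digitised element exported as a Collada mesh can be wrapped as a static Gazebo model with a minimal SDF description. The Python sketch below writes such a wrapper under stated assumptions: the model name, folder layout, and mesh path are hypothetical, and a model.config manifest (omitted here) is also required before Gazebo lists the model.
```python
# Sketch: wrap an exported DAE mesh as a static Gazebo model (hypothetical names/paths).
from pathlib import Path

SDF_TEMPLATE = """<?xml version="1.0"?>
<sdf version="1.6">
  <model name="{name}">
    <static>true</static>
    <link name="link">
      <visual name="visual">
        <geometry><mesh><uri>model://{name}/meshes/{name}.dae</uri></mesh></geometry>
      </visual>
      <collision name="collision">
        <geometry><mesh><uri>model://{name}/meshes/{name}.dae</uri></mesh></geometry>
      </collision>
    </link>
  </model>
</sdf>
"""

def write_gazebo_model(name: str, model_root: Path) -> Path:
    """Create the folder layout Gazebo expects and write the SDF wrapper."""
    model_dir = model_root / name
    (model_dir / "meshes").mkdir(parents=True, exist_ok=True)  # place <name>.dae here
    sdf_path = model_dir / "model.sdf"
    sdf_path.write_text(SDF_TEMPLATE.format(name=name))
    return sdf_path

write_gazebo_model("kerb_ramp", Path.home() / ".gazebo" / "models")
```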
These experiments enable the examination of the robot’s interaction in the digital environment. The developed digital model can also assist in training machine learning models for 3D object detection and segmentation algorithms [64,65], enhancing robots’ perception and interaction abilities. This digital twin platform facilitates the assessment and optimisation of robot behaviours and spatial layouts, leading to more efficient and effective robot-inclusive spaces. Leveraging the capabilities of an architectural digital twin enhances the process of developing design guidelines, providing illustrative representations for better understanding among professionals in the design and architecture fields.

4. Results and Discussion

From the proposed methodology, digital twin simulations are able to provide visualisations of how a robot interacts with the digital representation of the built environment. The simulations provide insights into the robot's ability to manoeuvre safely without collision and accomplish its goal within the stipulated environment. From this process, the parameters that may limit or hinder the robot's performance and lead to failure can be analysed to provide recommendations for future design guidelines.

4.1. Case Study for Validation

One of the most essential applications of robotics is in the domain of cleaning. Cleaning is a vital aspect of ensuring good hygiene within an environment for the population [66,67]. Current cleaning procedures and services have adopted cleaning robots to streamline and automate the cleaning procedures. AMRs are also employed to audit the cleanliness of the space. For instance, Pey et al. [68] showcased an AMR for microbial cleanliness audits, suitable for residential and food processing plants where high cleanliness standards are essential. However, for these processes to be effective, the robots must be capable of navigating safely towards the desired waypoints for cleaning [69]. The increased usage and integration of autonomous cleaning-related robots in different indoor and outdoor built environments have highlighted the demand for automated cleaning processes. Therefore, the proposed method using digital twins will focus on the application of cleaning as a use case, specifically by validating the accessibility of cleaning robots within public spaces.

4.2. Digital Modelling

Four models of cleaning robots were chosen, shown in Figure 2, with their specifications documented in Table 1. The models include a large-sized cleaning robot (CleanerBot1 [70]), two medium-sized cleaning robots (CleanerBot2 [71] and CleanerBot3 [72]), and a small-sized domestic robot (iRobot-Create [73]). The models were chosen to provide robot diversity based on their width, length, and height.
Six distinct environments, each equipped with corresponding facilities, were modelled in adherence to the Accessibility Design Guidelines [50]. These guidelines served as the benchmark for evaluating and augmenting the concept of barrier-free design. The study investigates three different modelling techniques: newly developed 3D modelling, voxel modelling, and Poisson surface reconstruction, with the latter two involving the reconstruction of models from captured point cloud data. The research aims to identify the most suitable method for generating architectural assets for robot simulations within these environments. The environment set consists of a kerb ramp, a walkway ramp, 6-seat and 3-seat tables, a Wheelchair-Friendly (WF) table, and a residential shared corridor area, as displayed in Figure 3.
For the kerb ramp and walkway ramp environments, a successful deployment would require the robots to navigate from the base of the ramp to the top without getting stuck due to the width or gradient of the ramp. This simulates the ability of the robots to reach different waypoints in an actual cleaning deployment. For the 6-seat, 3-seat, and WF tables, the metric for successful deployment would require the robot to have the capacity to access the space below the table without hitting either the table or the chairs. This ensures that the robots are capable of cleaning not only the areas surrounding the table but also below the table itself during the deployment. Lastly, for the residential environment, the assessment is based on the robot's ability to make turns at corners while navigating through doors and corridors safely without collision. The summarized details of the environments, as well as the metrics for determining successful deployment within each environment, are collated in Table 2.
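Although the study's assessment is performed through full Gazebo simulations, the pass/fail metrics above are largely clearance conditions, and a coarse static pre-screen can be expressed in a few lines. The sketch below is purely illustrative; the dimensions and margin are placeholders rather than the values from Table 1 or Table 2, and it is not part of the proposed methodology.
```python
# Illustrative static clearance pre-check; all dimensions and the margin are placeholders.
from dataclasses import dataclass

@dataclass
class RobotSpec:
    name: str
    width: float   # m
    height: float  # m

@dataclass
class EnvClearance:
    name: str
    min_width: float   # narrowest passage the robot must pass through (m)
    min_height: float  # lowest overhead clearance, e.g. table underside (m)

def fits(robot: RobotSpec, env: EnvClearance, margin: float = 0.05) -> bool:
    """The robot must clear both the narrowest width and the lowest height with a margin."""
    return (robot.width + margin <= env.min_width
            and robot.height + margin <= env.min_height)

robots = [RobotSpec("iRobot-Create", 0.34, 0.09), RobotSpec("CleanerBot1", 0.85, 1.10)]
envs = [EnvClearance("WF table underside", 0.90, 0.70),
        EnvClearance("corridor passing space", 1.20, 2.40)]

for r in robots:
    for e in envs:
        print(f"{r.name} vs {e.name}: {'pass' if fits(r, e) else 'fail'}")
```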

4.3. Evaluating Modelling Methods

As depicted in Figure 4, three distinct modelling techniques were employed to create architectural assets and determine the most suitable approach for interaction studies. Notably, the analysis revealed that although time-intensive, constructing a newly designed model proves superior for simulation purposes. In contrast, reconstruction from point cloud data is a less time-consuming alternative, but it still requires data processing for effective simulation. Reconstruction techniques offer the advantage of capturing realistic data, including texture and material values, which opens avenues for further exploration in robot simulation. It is crucial to emphasize that proficient data collection from point clouds necessitates a trained user to ensure realistic capture; any lapses in data collection can lead to model failure and study limitations.
From Figure 5, while representing volumetric attributes of elements, the voxel model sacrifices geometric information vital for in-depth analysis. Notably, smooth edges are transformed into sharp ones. In the case of Poisson surface reconstruction techniques, the documented point clouds must be complete and accurate; incomplete data invariably results in model failure. Thus, meticulous and comprehensive documentation is essential for successful results, as even seemingly flat surfaces can be translated into undulated ones during the meshing process. The modelling methods are flexible and can be tailored to the software used for studying the robot-built environment interactions. The identified limitations can be addressed by experts, facilitating smoother analysis processes in the future.

4.4. Evaluating Design Standards

From the simulations, the different specifications of each robot led the robots to interact differently with the built environment. The results of the simulations provided insights into how the environment or robot can be modified in order to achieve a successful deployment. While all robots are used for cleaning purposes, the simulations can help identify the most suitable robots based on the nature of the built environment.

4.4.1. Kerb and Walkway Ramps

In the kerb ramp experiment, from Figure 6, despite the narrow width of the entry point of the ramp, the CleanerBot1, CleanerBot2, and CleanerBot3 are able to scale the ramp and successfully reach the top. However, for the iRobot-Create, the gradient of the ramp is too steep, preventing the robot from reaching the top. Additionally, the presence of the tactile markers on the ramp makes it difficult for the robot to overcome this obstacle due to the low clearance between the bottom of the robot and the markers.
In the walkway ramp environment, Figure 7 shows that while CleanerBot1 was able to clear the first half of the ramp, the tight width of the turning section caused the robot to get stuck. The iRobot-Create managed to clear the markers at the base of the ramp but got stuck on the first half of the ramp due to the steep gradient. In contrast, CleanerBot2 and CleanerBot3 were both able to reach the top of the ramp without getting stuck due to the width or gradient of the ramp.
The kerb and walkway ramp simulations indicated that robot models similar in specification to the iRobot-Create should not be deployed on either type of ramp, as the robot has a high tendency of getting stuck, which would require additional manpower and effort to remove and redeploy it. Similarly, CleanerBot1 should not be deployed on walkway ramps of that specific width. Instead, alternative routes such as lifts should be used rather than ramps to enable the robot to navigate effectively towards the desired waypoint.

4.4.2. 6-Seat and 3-Seat Table

In the fixed 6-seat table simulation, only the iRobot-Create is able to access the space beneath the table and chairs effectively, as documented in Figure 8. Its smaller size enables the robot to pass through the narrow gaps between the chairs without being obstructed by the height of either the tables or the chairs.
For the fixed 3-seat table environment, displayed in Figure 9, despite the removal of half the chairs, resulting in fewer obstructions and obstacles, only the iRobot-Create is still able to successfully access the space under the table. While the width of the space is sufficient for the larger cleaning robots to cover the areas where the removed chairs used to be situated, the height of the robots prevents access to the space under the table. Hence, for cleaning deployments similar to these specific scenarios, smaller robots should be used instead to ensure effective cleaning and coverage.

4.4.3. Wheelchair-Friendly Table

Unlike the previous simulations that consisted of both tables and chairs, the wheelchair-friendly table, as displayed in Figure 10, does not have chairs that would obstruct the entry point of cleaning robots. However, despite this, the height clearance only enables the iRobot-Create to successfully cover the space below the table, making it the most suitable robot for such cleaning deployments. If other robots were to be used, this could cause damage to the robot's frame and external sensors, incurring unnecessary repair costs.

4.4.4. Residential Corridor

For the residential shared environment, the scenario aims to visualise if the robots can safely navigate even when humans are present, as shown in Figure 11. Two humans were placed at the bottom entry point of the environment along the passing space and one in the middle of the corridor.
From Figure 12, the CleanerBot1 is unable to successfully navigate the passing space due to human obstruction, which could result in a collision leading to injuries to the human and damage to the robot. Along the corridor, only the iRobot-Create can navigate successfully while the CleanerBot2 and CleanerBot3 are stuck due to width constraints.
From Figure 13 and Figure 14, all robots are able to clear the protrusion on the corridor wall without being hindered. However, only the CleanerBot1 was unable to complete the turning operation and failed to pass through the door due to the robot's large width. For this specific environment, the iRobot-Create is the most suitable candidate for successful deployment for both navigation and cleaning. In contrast, CleanerBot1 is deemed unfit due to its inability to successfully navigate through environmental obstacles and spaces even without humans present. However, the CleanerBot2 and CleanerBot3 can be considered for deployment to cover certain areas of the environment that are free of humans.

5. Discussion

From the digital twin simulations, Table 3 displays a comparison of each robot's overall performance across the tested environments. The achieved goal count is calculated as the total number of environments in which the robot is able to achieve the goal for that particular environment based on the metrics identified in Table 2. The iRobot-Create completes the most goals and performs the best; thus, it would be the most suitable robot for deployment amongst the tested environments. In contrast, CleanerBot1 achieves the fewest goals amongst the selected environments and would be the least suitable robot for deployment. It is important to highlight that the degree of robot inclusiveness does not always directly correlate with efficient robot performance. However, an inclusive environment does promote improved reach and accessibility for robots, making it easier for them to carry out their intended tasks without obstacles.
The simulations provide valuable insights into the interactions between robots and built environments, fostering a deeper understanding and refinement of design principles for both robotics and architectural design. Continued exploration through these simulations contributes to enhancing design guidelines in these domains.

6. Conclusions

This research has effectively proposed an architectural digital twin approach that enables the study and comprehensive analysis of the interactions between robots and the built environment to enhance built environment design guidelines. A well-defined methodology has been proposed to study and validate the interaction through simulation. These solutions and the suggested methodology for design guideline generation collectively contribute to enhancing the efficiency and effectiveness of creating robot-inclusive design guidelines.
This research focuses on exploring the potential of digital simulations to enhance the performance of deployed robots in architectural spaces. However, the study is primarily confined to the robot’s interactions in static building infrastructure, neglecting the dynamic elements present in real-world environments. The absence of inhabitants, including humans, pets, and other robots, limits the realism of the study. However, typical cleaning robot deployments (cleaning robots are considered as a case study in this work) can be conducted during non-peak hours [74], in which the static environment results remain relevant for insights on robot ergonomics. It is essential to consider and study the dynamic interactions with different types of inhabitants to improve the applicability and relevance of the guidelines enhanced by the digital simulation. The consideration of dynamic features in the environments for the digital twin is proposed for future work.
Discrepancies would still exist between the digital twin and the actual environment. However, given that the methodology is modelled in accordance with industry standards and actual robot specifications, the discrepancies remain acceptable. Furthermore, the digital twin concept in this work is utilized to evaluate robot accessibility, which is primarily dependent on geometrical constraints. The discrepancies between simulation results and real-world situations would be minor concerning geometrical constraints. Therefore, the outcomes from the digital twin would still apply to real-world scenarios to a great extent. Future work may expand the simulation towards higher-fidelity simulation software such as PyBullet or IsaacSim.
The scope of the research can be extended to use the digital model to develop a live digital twin that can identify hazards and train the robot's algorithms for detecting such hazards. The training can include AI models such as belief networks [75] or deep learning-based approaches [76]. The solution also provides a foundation for automating the interaction analysis, which can then be validated to generate the design guidelines. The proposed methodology is scalable and can be adapted for mapping and digitising spaces to make them robot-inclusive. This research identifies a gap in the adequacy of BIM for studying robot-building interaction and proposes the need for modelling protocols.

Author Contributions

Conceptualization, A.E.; methodology, A.E.; software, A.E. and J.J.J.P.; validation, A.E. and J.J.J.P.; formal analysis, A.E. and J.J.J.P.; investigation, A.E. and J.J.J.P.; data curation, A.E. and J.J.J.P.; writing—original draft preparation, A.E. and J.J.J.P.; writing—review and editing, M.A.V.J.M. and M.B.; visualization, A.E. and J.J.J.P.; supervision, M.A.V.J.M., M.B. and M.R.E.; project administration, M.A.V.J.M. and M.B.; funding acquisition, M.R.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the National Robotics Programme under its National Robotics Programme (NRP) BAU, Ermine III: Deployable Reconfigurable Robots, Award No. M22NBK0054 and also supported by A*STAR under its “RIE2025 IAF-PP Advanced ROS2-native Platform Technologies for Cross sectorial Robotics Adoption (M21K1a0104)” programme.

Data Availability Statement

The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Burnstein, J. 6 Trends Bound to Boost Automation Growth in 2023. Quality 2023, 62, 22. [Google Scholar]
  2. Elara, M.R.; Tan, N.; Tjoelsen, K.; Sosa, R. Designing the robot inclusive space challenge. Digit. Commun. Netw. 2015, 1, 267–274. [Google Scholar]
  3. Farkas, Z.V.; Korondi, P.; Fodor, L. Safety aspects and guidelines for robot compatible environment. In Proceedings of the IECON 2012—38th Annual Conference on IEEE Industrial Electronics Society, Montreal, QC, Canada, 25–28 October 2012; pp. 5547–5552. [Google Scholar]
  4. Elara, M.R.; Rojas, N.; Chua, A. Design principles for robot inclusive spaces: A case study with Roomba. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 5593–5599. [Google Scholar]
  5. Tan, N.; Mohan, R.E.; Watanabe, A. Toward a framework for robot-inclusive environments. Autom. Constr. 2016, 69, 68–78. [Google Scholar] [CrossRef]
  6. Yeo, M.S.K.; Samarakoon, S.M.B.P.; Ng, Q.B.; Ng, Y.J.; Muthugala, M.A.V.J.; Elara, M.R.; Yeong, R.W.W. Robot-Inclusive False Ceiling Design Guidelines. Buildings 2021, 11, 600. [Google Scholar] [CrossRef]
  7. Farkas, Z.; Nádas, G.; Kolossa, J.; Korondi, P. Robot Compatible Environment and Conditions. Period. Polytech. Civ. Eng. 2021, 65, 784–791. [Google Scholar] [CrossRef]
  8. Sandoval, E.B.; Sosa, R.; Montiel, M. Robot-Ergonomics: A Proposal for a Framework in HRI. In Proceedings of the Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA, 5–8 March 2018; HRI ’18. pp. 233–234. [Google Scholar] [CrossRef]
  9. Howard, A.; Scheidt, N.; Engels, N.; Gonzalez, F.; Shin, D.; Rodriguez, M.; Brown, K. “Excuse Me, Robot...” The Rules of Human-Centric Space in the 21st Century; Technical Report; Gensler Research Institute: London, UK, 2020. [Google Scholar]
  10. Munoz Sanz, V.; Verzier, M.; Kuijpers, M.; Groen, L.; Bedir, M. Automated Landscapes. 2023. Available online: https://www.researchgate.net/publication/368667652_Automated_Landscapes (accessed on 4 September 2023).
  11. Naraharisetti, P.R.; Saliba, M.A.; Fabri, S.G. Towards the Quantification of Robot-inclusiveness of a Space and the Implications on Robot Complexity. In Proceedings of the 2022 8th International Conference on Automation, Robotics and Applications (ICARA), Prague, Czech Republic, 18–20 February 2022; pp. 39–43. [Google Scholar]
  12. Ng, Y.; Yeo, M.S.; Ng, Q.; Budig, M.; Muthugala, M.V.J.; Samarakoon, S.B.P.; Mohan, R. Application of an adapted FMEA framework for robot-inclusivity of built environments. Sci. Rep. 2022, 12, 3408. [Google Scholar] [CrossRef] [PubMed]
  13. Muthugala, M.V.J.; Samarakoon, S.B.P.; Elara, M.R. Design by robot: A human-robot collaborative framework for improving productivity of a floor cleaning robot. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 7444–7450. [Google Scholar]
  14. Yeo, M.; Samarakoon, B.; Ng, Q.; Muthugala, V.; Mohan, R.E. Design of Robot-Inclusive Vertical Green Landscape. Buildings 2021, 11, 203. [Google Scholar] [CrossRef]
  15. Ivanov, S.; Webster, C. Designing robot-friendly hospitality facilities. In Proceedings of the Scientific Conference “Tourism, Innovations, Strategies”, Bourgas, Bulgaria, 13–14 October 2017. [Google Scholar]
  16. Verne, G.B. Adapting to a Robot: Adapting Gardening and the Garden to fit a Robot Lawn Mower. In Proceedings of the Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; pp. 34–42. [Google Scholar] [CrossRef]
  17. Yeo, M.S.K.; Pey, J.J.J.; Elara, M.R. Passive Auto-Tactile Heuristic (PATH) Tiles: Novel Robot-Inclusive Tactile Paving Hazard Alert System. Buildings 2023, 13, 2504. [Google Scholar] [CrossRef]
  18. Hosamo, H.H.; Imran, A.; Cardenas-Cartagena, J.; Svennevig, P.R.; Svidt, K.; Nielsen, H.K. A review of the digital twin technology in the AEC-FM industry. Adv. Civ. Eng. 2022, 2022, 2185170. [Google Scholar] [CrossRef]
  19. Deng, M.; Menassa, C.C.; Kamat, V.R. From BIM to digital twins: A systematic review of the evolution of intelligent building representations in the AEC-FM industry. J. Inf. Technol. Construct. 2021, 26, 58–83. [Google Scholar] [CrossRef]
  20. Boje, C.; Guerriero, A.; Kubicki, S.; Rezgui, Y. Towards a semantic Construction Digital Twin: Directions for future research. Autom. Constr. 2020, 114, 103179. [Google Scholar] [CrossRef]
  21. Calvetti, D.; Mêda, P.; Chichorro Gonçalves, M.; Sousa, H. Worker 4.0: The future of sensored construction sites. Buildings 2020, 10, 169. [Google Scholar] [CrossRef]
  22. Sacks, R.; Brilakis, I.; Pikas, E.; Xie, H.S.; Girolami, M. Construction with digital twin information systems. Data-Centric Eng. 2020, 1, e14. [Google Scholar] [CrossRef]
  23. Stączek, P.; Pizoń, J.; Danilczuk, W.; Gola, A. A digital twin approach for the improvement of an autonomous mobile robots (AMR’s) operating environment—A case study. Sensors 2021, 21, 7830. [Google Scholar] [CrossRef] [PubMed]
  24. Fukushima, Y.; Asai, Y.; Aoki, S.; Yonezawa, T.; Kawaguchi, N. Digimobot: Digital twin for human-robot collaboration in indoor environments. In Proceedings of the 2021 IEEE Intelligent Vehicles Symposium (IV), Nagoya, Japan, 11–17 July 2021; pp. 55–62. [Google Scholar]
  25. Ibrahim, A.; Sabet, A.; Golparvar-Fard, M. BIM-driven mission planning and navigation for automatic indoor construction progress detection using robotic ground platform. In Proceedings of the EC3 Conference 2019; University College Dublin: Dublin, Ireland, 2019; Volume 1, pp. 182–189. [Google Scholar]
  26. Song, C.; Wang, K.; Cheng, J.C. BIM-aided scanning path planning for autonomous surveillance uavs with lidar. In Proceedings of the ISARC—International Symposium on Automation and Robotics in Construction, Seoul, Korea, 29 June–2 July 2010; Volume 37, pp. 1195–1202. [Google Scholar]
  27. Follini, C.; Terzer, M.; Marcher, C.; Giusti, A.; Matt, D.T. Combining the robot operating system with building information modeling for robotic applications in construction logistics. In Proceedings of the Advances in Service and Industrial Robotics: Results of RAAD; Springer: Berlin/Heidelberg, Germany, 2020; pp. 245–253. [Google Scholar]
  28. Pauwels, P.; de Koning, R.; Hendrikx, B.; Torta, E. Live semantic data from building digital twins for robot navigation: Overview of data transfer methods. Adv. Eng. Inform. 2023, 56, 101959. [Google Scholar] [CrossRef]
  29. Meschini, S.; Iturralde, K.; Linner, T.; Bock, T. Novel applications offered by integration of robotic tools in BIM-based design workflow for automation in construction processes. In Proceedings of the CIB* IAARC W119 CIC 2016 Workshop, Munich, Germany, 31 August 2016. [Google Scholar]
  30. Kousi, N.; Gkournelos, C.; Aivaliotis, S.; Lotsaris, K.; Bavelos, A.C.; Baris, P.; Michalos, G.; Makris, S. Digital twin for designing and reconfiguring human–robot collaborative assembly lines. Appl. Sci. 2021, 11, 4620. [Google Scholar] [CrossRef]
  31. Kim, S.; Peavy, M.; Huang, P.C.; Kim, K. Development of BIM-integrated construction robot task planning and simulation system. Autom. Constr. 2021, 127, 103720. [Google Scholar] [CrossRef]
  32. Chen, J.; Lu, W.; Fu, Y.; Dong, Z. Automated facility inspection using robotics and BIM: A knowledge-driven approach. Adv. Eng. Inform. 2023, 55, 101838. [Google Scholar] [CrossRef]
  33. Kim, K.; Peavy, M. BIM-based semantic building world modeling for robot task planning and execution in built environments. Autom. Constr. 2022, 138, 104247. [Google Scholar] [CrossRef]
  34. Pan, Y.; Braun, A.; Brilakis, I.; Borrmann, A. Enriching geometric digital twins of buildings with small objects by fusing laser scanning and AI-based image recognition. Autom. Construct. 2022, 140, 104375. [Google Scholar] [CrossRef]
  35. Mohammadi, M.; Rashidi, M.; Mousavi, V.; Karami, A.; Yu, Y.; Samali, B. Case study on accuracy comparison of digital twins developed for a heritage bridge via UAV photogrammetry and terrestrial laser scanning, SHMII. In Proceedings of the 10th International Conference on Structural Health Monitoring of Intelligent Infrastructure, SHMII, Porto, Portugal, 30 June–2 July 2021; Volume 10. [Google Scholar]
  36. Shabani, A.; Skamantzari, M.; Tapinaki, S.; Georgopoulos, A.; Plevris, V.; Kioumarsi, M. 3D simulation models for developing digital twins of heritage structures: Challenges and strategies. Procedia Struct. Integr. 2022, 37, 314–320. [Google Scholar] [CrossRef]
  37. Sommer, M.; Seiffert, K. Scan methods and tools for reconstruction of built environments as basis for digital twins. In DigiTwin: An Approach for Production Process Optimization in a Built Environment; Springer: Berlin/Heidelberg, Germany, 2022; pp. 51–77. [Google Scholar]
  38. Yang, M.D.; Chao, C.F.; Huang, K.S.; Lu, L.Y.; Chen, Y.P. Image-based 3D scene reconstruction and exploration in augmented reality. Autom. Constr. 2013, 33, 48–60. [Google Scholar] [CrossRef]
  39. Fritsch, D.; Klein, M. Augmented reality 3D reconstruction of buildings–reconstructing the past. Int. J. Multim. Tools Appl. 2018. Available online: https://www.researchgate.net/publication/316114730_3D_preservation_of_buildings_-_Reconstructing_the_past (accessed on 4 September 2023).
  40. Izadi, S.; Kim, D.; Hilliges, O.; Molyneaux, D.; Newcombe, R.; Kohli, P.; Shotton, J.; Hodges, S.; Freeman, D.; Davison, A.; et al. Kinectfusion: Real-time 3d reconstruction and interaction using a moving depth camera. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, Santa Barbara, CA, USA, 16–19 October 2011; pp. 559–568. [Google Scholar]
  41. Kang, X.; Li, J.; Fan, X.; Wan, W. Real-Time RGB-D Simultaneous Localization and Mapping Guided by Terrestrial LiDAR Point Cloud for Indoor 3-D Reconstruction and Camera Pose Estimation. Appl. Sci. 2019, 9, 3264. [Google Scholar] [CrossRef]
  42. Chen, C.; Fragonara, L.Z.; Tsourdos, A. RoIFusion: 3D Object Detection From LiDAR and Vision. IEEE Access 2021, 9, 51710–51721. [Google Scholar] [CrossRef]
  43. Qi, C.R.; Liu, W.; Wu, C.; Su, H.; Guibas, L.J. Frustum pointnets for 3d object detection from rgb-d data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 918–927. [Google Scholar]
  44. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. Pointnet++: Deep hierarchical feature learning on point sets in a metric space. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Volume 30. [Google Scholar]
  45. Qi, C.R.; Litany, O.; He, K.; Guibas, L.J. Deep hough voting for 3d object detection in point clouds. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9277–9286. [Google Scholar]
  46. Misra, I.; Girdhar, R.; Joulin, A. An end-to-end transformer model for 3d object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 2906–2917. [Google Scholar]
  47. Erabati, G.K.; Araujo, H. Li3detr: A lidar based 3d detection transformer. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 4250–4259. [Google Scholar]
  48. Bharathi, S.; Pareek, P.K.; Rani, B.S.; Chaitra, D. 3-Dimensional Object Detection Using Deep Learning Techniques. In Proceedings of the International Conference on Emerging Research in Computing, Information, Communication and Applications; Springer: Berlin/Heidelberg, Germany, 2023; pp. 309–319. [Google Scholar]
  49. Ahmadyan, A.; Hou, T.; Wei, J.; Zhang, L.; Ablavatski, A.; Grundmann, M. Instant 3D object tracking with applications in augmented reality. arXiv 2020, arXiv:2006.13194. [Google Scholar]
  50. BCA, D. Code on Accessibility in the Built Environment; Building and Construction Authority: Singapore, 2019. [Google Scholar]
  51. Poux, F. 5-Step Guide to Generate 3D Meshes from Point Clouds with Python. In Towards Data Science; 2020; Available online: https://orbi.uliege.be/bitstream/2268/254933/1/TDS_generate_3D_meshes_with_python.pdf (accessed on 4 September 2023).
  52. Moenning, C.; Dodgson, N.A. Intrinsic point cloud simplification. Proc. 14th GrahiCon 2004, 14, 23. [Google Scholar]
  53. Fernández-Martínez, J.L.; Tompkins, M.; Mukerji, T.; Alumbaugh, D. Geometric sampling: An approach to uncertainty in high dimensional spaces. In Proceedings of the Combining Soft Computing and Statistical Methods in Data Analysis; Springer: Berlin/Heidelberg, Germany, 2010; pp. 247–254. [Google Scholar]
  54. Poux, F.; Neuville, R.; Nys, G.A.; Billen, R. 3D point cloud semantic modelling: Integrated framework for indoor spaces and furniture. Remote Sens. 2018, 10, 1412. [Google Scholar] [CrossRef]
  55. Ning, X.; Li, F.; Tian, G.; Wang, Y. An efficient outlier removal method for scattered point cloud data. PloS ONE 2018, 13, e0201280. [Google Scholar] [CrossRef]
  56. Carrilho, A.; Galo, M.; Dos Santos, R.C. Statistical outlier detection method for airborne lidar data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 87–92. [Google Scholar] [CrossRef]
  57. Rusu, R.B.; Blodow, N.; Marton, Z.; Soos, A.; Beetz, M. Towards 3D object maps for autonomous household robots. In Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 29 October–2 November 2007; pp. 3191–3198. [Google Scholar]
  58. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  59. Ester, M.; Kriegel, H.P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. Proc. KDD 1996, 96, 226–231. [Google Scholar]
  60. Bassier, M.; Vergauwen, M.; Poux, F. Point Cloud vs. Mesh Features for Building Interior Classification. Remote Sens. 2020, 12, 2224. [Google Scholar] [CrossRef]
  61. Bernardini, F.; Mittleman, J.; Rushmeier, H.; Silva, C.; Taubin, G. The ball-pivoting algorithm for surface reconstruction. IEEE Trans. Vis. Comput. Graph. 1999, 5, 349–359. [Google Scholar] [CrossRef]
  62. Kazhdan, M.; Bolitho, M.; Hoppe, H. Poisson Surface Reconstruction Eurographics Symposium on Geometry Processing. 2006. Available online: https://hhoppe.com/poissonrecon.pdf (accessed on 4 September 2023).
  63. Poux, F.; Billen, R. Voxel-based 3D Point Cloud Semantic Segmentation: Unsupervised Geometric and Relationship Featuring vs Deep Learning Methods. ISPRS Int. J. Geo-Inf. 2019, 8, 213. [Google Scholar] [CrossRef]
  64. Poux, F.; Mattes, C.; Kobbelt, L. Unsupervised segmentation of indoor 3D point cloud: Application to object-based classification. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 44, 111–118. [Google Scholar] [CrossRef]
  65. Poux, F.; Mattes, C.; Selman, Z.; Kobbelt, L. Automatic region-growing system for the segmentation of large point clouds. Autom. Constr. 2022, 138, 104250. [Google Scholar] [CrossRef]
  66. Prayash, H.H.; Shaharear, M.R.; Islam, M.F.; Islam, S.; Hossain, N.; Datta, S. Designing and Optimization of An Autonomous Vacuum Floor Cleaning Robot. In Proceedings of the 2019 IEEE International Conference on Robotics, Automation, Artificial-intelligence and Internet-of-Things (RAAICON), Dhaka, Bangladesh, 29 November–1 December 2019; pp. 25–30. [Google Scholar] [CrossRef]
  67. Samarakoon, S.M.B.P.; Muthugala, M.A.V.J.; Vu Le, A.; Elara, M.R. hTetro-Infi: A Reconfigurable Floor Cleaning Robot with Infinite Morphologies. IEEE Access 2020, 8, 69816–69828. [Google Scholar] [CrossRef]
  68. Pey, J.J.J.; Povendhan, A.P.; Pathmakumar, T.; Elara, M.R. Robot-aided Microbial Density Estimation and Mapping. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022; pp. 2265–2272. [Google Scholar] [CrossRef]
  69. Muthugala, M.A.V.J.; Samarakoon, S.M.B.P.; Elara, M.R. Online Coverage Path Planning Scheme for a Size-Variable Robot. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 5688–5694. [Google Scholar] [CrossRef]
  70. OpenRobotics. CleanerBot1. Available online: https://fuel.gazebosim.org/1.0/OpenRobotics/models/CleanerBot1 (accessed on 3 September 2023).
  71. OpenRobotics. CleanerBot2. Available online: https://fuel.gazebosim.org/1.0/OpenRobotics/models/CleanerBot2 (accessed on 3 September 2023).
  72. OpenRobotics. HospitalBot. Available online: https://fuel.gazebosim.org/1.0/OpenRobotics/models/HospitalBot (accessed on 3 September 2023).
  73. OpenRobotics. iRobot Create. Available online: https://fuel.gazebosim.org/1.0/OpenRobotics/models/iRobotCreate (accessed on 4 September 2023).
  74. Wijegunawardana, I.; Muthugala, M.V.J.; Samarakoon, S.B.P.; Hua, O.J.; Padmanabha, S.G.A.; Elara, M.R. Insights from autonomy trials of a self-reconfigurable floor-cleaning robot in a public food court. J. Field Robot. 2024, 41, 811–822. [Google Scholar] [CrossRef]
  75. Al-Khazraji, H.; Nasser, A.R.; Hasan, A.M.; Al Mhdawi, A.K.; Al-Raweshidy, H.; Humaidi, A.J. Aircraft Engines Remaining Useful Life Prediction Based on A Hybrid Model of Autoencoder and Deep Belief Network. IEEE Access 2022, 10, 82156–82163. [Google Scholar] [CrossRef]
  76. Nasser, A.R.; Hasan, A.M.; Humaidi, A.J. DL-AMDet: Deep learning-based malware detector for android. Intell. Syst. Appl. 2024, 21, 200318. [Google Scholar] [CrossRef]
Figure 1. Overview of the proposed methodology.
Figure 2. Robots simulated in the digital twin: (a) CleanerBot1, (b) CleanerBot2, (c) CleanerBot3, (d) iRobot-Create.
Figure 3. Built environments for simulation testing models based on Accessibility Design Guidelines: (a) kerb ramp, (b) walkway ramp, (c) six-seat table (fixed), (d) three-seat table (fixed), (e) wheelchair-friendly table, (f) residential corridor.
Figure 4. Modelling techniques: (a) test site: six-seater fixed round dining table, (b) newly developed 3D Polygonal Model, (c) 3D reconstructed Voxel mesh from PCD, (d) 3D reconstructed Poisson surface mesh from PCD.
Figure 5. Evaluation of modelling techniques: (a) site: six-seater fixed round dining table, (b) newly developed 3D Polygonal Model: replicates data, (c) 3D reconstructed Voxel mesh from PCD: loss of geometric curvature details, (d) 3D reconstructed Poisson surface mesh from PCD: undulated and incomplete surfaces. Red areas indicate the loss of geometric features.
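For context, voxel and Poisson meshes such as those compared in Figures 4 and 5 can be generated from a scanned point cloud with standard open-source tooling. The following is a minimal sketch using the Open3D library; the file names and parameter values are illustrative assumptions and not necessarily the exact tooling or settings used in this study.

```python
import open3d as o3d

# Load the scanned point cloud of the test site (hypothetical file name).
pcd = o3d.io.read_point_cloud("six_seater_table.ply")

# Statistical outlier removal (cf. refs. [55,57]): discard points whose mean
# neighbour distance deviates strongly from the global average.
pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

# Voxel representation (cf. Figure 4c): quantise the cloud into a voxel grid.
voxel_grid = o3d.geometry.VoxelGrid.create_from_point_cloud(pcd, voxel_size=0.05)

# Poisson surface reconstruction (cf. Figure 4d and ref. [62]) requires oriented normals.
pcd.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)

# Export the mesh so it can be imported as a visual/collision asset in the simulator.
o3d.io.write_triangle_mesh("six_seater_table_poisson.obj", mesh)
```

As Figure 5 illustrates, such reconstructions tend to lose curvature detail (voxel mesh) or produce undulated, incomplete surfaces (Poisson mesh), which motivates the newly developed 3D polygonal model adopted in this work.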
Figure 6. Digital twin results of each robot in the kerb ramp environment. The red area indicates inaccessible space.
Figure 7. Digital twin results of each robot in the walkway ramp environment. The red area indicates inaccessible space.
Figure 8. Digital twin results of each robot in the six-seat table environment. Red areas indicate inaccessible spaces.
Figure 9. Digital twin results of each robot in the three-seat table environment. Red areas indicate inaccessible spaces.
Figure 10. Digital twin results of each robot in the wheelchair-friendly table environment. Red areas indicate inaccessible spaces.
Figure 11. (a) Top view of the residential shared environment—corridor without humans. (b) Isometric view. (c) Top view of the residential shared environment—corridor with humans.
Figure 12. Digital twin results of each robot for clearing the corridor in the residential environment. Red areas indicate inaccessible spaces.
Figure 13. Digital twin results of each robot for clearing the projection and turning in the corridor of the residential environment. Red areas indicate inaccessible spaces.
Figure 14. Digital twin results of each robot for clearing the door in the residential environment. Red areas indicate inaccessible spaces.
Table 1. Specifications of robots used in the digital twin simulations.
Robot | Length (m) | Width (m) | Height (m)
CleanerBot1 | 1.32 | 0.91 | 1.19
CleanerBot2 | 1.26 | 0.79 | 0.97
CleanerBot3 | 0.67 | 0.85 | 1.23
iRobot-Create | 0.32 | 0.32 | 0.07
Table 2. Metrics and parameters used to evaluate the success of robot deployment in each environment based on the environments adhering to the Accessibility Design Guidelines 2019.
Environment | Robot Goal(s) | Evaluation Parameters | Parameters | Width (W)/Height (H)
Kerb Ramp | Climb ramp | Width and Gradient | Slope < 1:10 | W 900 mm
Walkway Ramp | Climb ramp | Width and Gradient | Slope < 1:12 | W 1200 mm
Six-Seat Table | Access beneath the furniture | Width and Height | Dia 1000 mm | Table H 765 mm; Chair H 450 mm
Three-Seat Table | Access beneath the furniture | Width and Height | Dia 1000 mm | Table H 765 mm; Chair H 450 mm
WF Table | Access beneath the furniture | Height | Kneespace 480 mm | H 680 mm
Residential Door | Clear door | Width | - | W 850 mm
Residential Corridor | Clear corridor, Turning | Width | - | W 1200 mm
Table 3. Total number of times each robot achieved a goal within the corresponding environment.
Robot | Achieved Goal Count
CleanerBot1 | 1
CleanerBot2 | 4
CleanerBot3 | 4
iRobot-Create | 5
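The goal counts in Table 3 are obtained from the digital twin simulations, but their pattern can be anticipated by screening each robot's dimensions (Table 1) against the guideline clearances in Table 2. The sketch below is a simplified, purely dimensional first-pass check; the variable names and thresholds are illustrative assumptions, and it does not replace the physics-based simulation used to obtain the reported results.

```python
import math
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    length: float  # m
    width: float   # m
    height: float  # m

# Robot dimensions from Table 1.
robots = [
    Robot("CleanerBot1", 1.32, 0.91, 1.19),
    Robot("CleanerBot2", 1.26, 0.79, 0.97),
    Robot("CleanerBot3", 0.67, 0.85, 1.23),
    Robot("iRobot-Create", 0.32, 0.32, 0.07),
]

# Guideline clearances from Table 2, converted to metres.
KERB_RAMP_WIDTH = 0.90
WALKWAY_RAMP_WIDTH = 1.20
WF_TABLE_UNDERSIDE_HEIGHT = 0.68
DOOR_WIDTH = 0.85
CORRIDOR_WIDTH = 1.20

def fits_width(robot: Robot, clearance: float) -> bool:
    """Can the robot pass straight through an opening of the given width?"""
    return robot.width < clearance

def can_turn_in(robot: Robot, corridor_width: float) -> bool:
    """Rough in-place turning check: the footprint diagonal must fit in the corridor."""
    return math.hypot(robot.length, robot.width) < corridor_width

for r in robots:
    print(f"{r.name}:")
    print(f"  kerb ramp        : {'pass' if fits_width(r, KERB_RAMP_WIDTH) else 'fail'}")
    print(f"  walkway ramp     : {'pass' if fits_width(r, WALKWAY_RAMP_WIDTH) else 'fail'}")
    print(f"  under WF table   : {'pass' if r.height < WF_TABLE_UNDERSIDE_HEIGHT else 'fail'}")
    print(f"  residential door : {'pass' if fits_width(r, DOOR_WIDTH) else 'fail'}")
    print(f"  corridor turning : {'pass' if can_turn_in(r, CORRIDOR_WIDTH) else 'fail'}")
```

Such a screen flags, for example, that only the low-profile iRobot-Create clears the space beneath the tables, while the larger cleaning robots struggle with the 900 mm kerb ramp and with turning inside the 1200 mm corridor, consistent with the simulation outcomes summarised in Table 3.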