Article

Evaluating the Robot Inclusivity of Buildings Based on Surface Unevenness

by Charan Satya Chandra Sairam Borusu 1, Matthew S. K. Yeo 1, Zimou Zeng 1, M. A. Viraj J. Muthugala 1,*, Michael Budig 1, Mohan Rajesh Elara 1 and Yixiao Wang 2

1 Engineering Product Development Pillar, Singapore University of Technology and Design, Singapore 487372, Singapore
2 School of Industrial Design, Georgia Institute of Technology, Atlanta, GA 30332, USA
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(17), 7831; https://doi.org/10.3390/app14177831
Submission received: 7 July 2024 / Revised: 21 August 2024 / Accepted: 26 August 2024 / Published: 4 September 2024
(This article belongs to the Section Robotics and Automation)

Abstract: Mobile service robots experience excessive vibrations when travelling over uneven surfaces in their workspace, which increases the degradation rate of their mechanical components and disrupts the sensing abilities needed for proper localization and navigation. Robot inclusivity principles can determine the suitability of a site for robot deployment by considering the ground's unevenness. This paper proposes a novel framework to autonomously evaluate the Robot Inclusivity Level of buildings based on surface unevenness (RIL-SU) by quantifying the unevenness of floor surfaces. The surface unevenness values are converted to RIL-SU using a rule-based approach, and the corresponding RIL-SU is tagged to the map location. A coloured heatmap based on the RIL-SU values is created as a visual representation of the RIL-SU of a given space. This heatmap is useful for modifying the environment to make it more robot-friendly or for restricting the robot's operation in certain areas to avoid possible robot failures. The experimental results show that the proposed framework can successfully generate a valid RIL-SU heatmap for building environments.

1. Introduction

There has been a significant rise in the deployment of autonomous mobile robots in indoor environments, encompassing multiple applications such as inspection [1], cleaning [2], logistics [3], and assistance applications [4]. Advancements in robot technologies and artificial intelligence have brought about a significant change in the way repetitive, time-consuming, and exhaustive tasks are performed by humans [5].
In these varied applications, navigation is a pivotal function undertaken by robots [6,7]. Mapping and localization are essential for effective navigation [8,9]. Mapping involves the creation of spatial representations of the robot's environment, enabling it to detect obstacles and plan paths towards goals. Meanwhile, localization is crucial for the robot to determine its precise position within the mapped environment [10,11]. This relationship between mapping and localization ensures accurate and reliable navigation, enabling robots to move around indoor spaces to accomplish tasks efficiently. Moreover, the significance of localization extends beyond navigation; it directly influences the quality and efficacy of mapping efforts, as precise localization facilitates the creation of detailed and reliable maps essential for successful robotic operations in diverse applications [12,13]. Thus, within autonomous robotics, the synergy between navigation, mapping, and localization is crucial for achieving robust and adaptive performance in diverse indoor environments.
Sensor fusion is the main method of improving localization accuracy in robotic systems [14,15]. Various sensors, such as Light Detection and Ranging (LiDAR), Inertial Measurement Units (IMUs), and wheel encoders, are fused to improve the localization accuracy of indoor robots [15]. LiDAR sensors capture the robot's surroundings in the form of point cloud data [16], wheel encoders provide information on the distance travelled by the robot, and the IMU provides information on the robot's direction, acceleration, and angular velocity [17]. In addition to LiDAR, an optical camera can be used to sense the robot's surroundings to create a map and measure the depth of objects relative to the robot's base [18]. Fusion algorithms combine the data from these sensors to compensate for the errors in each sensor, producing a reliable localization system with improved accuracy and reduced uncertainty. Common sensor fusion techniques used for localization include particle filters [19] and Kalman filters [20].
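To make the fusion idea concrete, the following minimal sketch shows a one-dimensional Kalman filter that blends wheel-odometry motion updates with absolute position fixes (e.g., LiDAR-derived). It is an illustrative toy example rather than the fusion pipeline of any cited system; the noise values and variable names are assumptions.

class KalmanFilter1D:
    def __init__(self, x0=0.0, p0=1.0, q=0.01, r=0.1):
        self.x = x0  # position estimate along a corridor (m)
        self.p = p0  # estimate variance
        self.q = q   # process noise variance (odometry drift per step)
        self.r = r   # measurement noise variance (absolute fix)

    def predict(self, dx):
        # Propagate the state with an odometry displacement; uncertainty grows.
        self.x += dx
        self.p += self.q

    def update(self, z):
        # Correct with an absolute position measurement z; uncertainty shrinks.
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)
        self.p *= 1.0 - k
        return self.x

kf = KalmanFilter1D()
for dx, z in [(0.10, 0.12), (0.10, 0.19), (0.10, 0.31)]:
    kf.predict(dx)
    print(f"fused position: {kf.update(z):.3f} m")

The filter weighs the odometry prediction against each measurement according to their respective uncertainties, which is the same principle the multi-sensor fusion methods above apply in higher dimensions.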
Despite advanced sensor fusion methods and algorithms, there is still a probability of failure in accurate localization and mapping. Surface unevenness, such as rough patches or sudden steps, can lead to imprecise sensor measurements. Uncertainty in the robot's deployment space can lead to misalignment of the sensors' positions. This, in turn, affects the robot's localization and navigation performance, leading to failures in performing the given tasks [21,22]. The reliability of localization can also be improved by integrating additional sensors into the robot's hardware or hybridizing multiple sensor inputs [23]. Furthermore, artificial intelligence techniques such as deep learning [24] and reinforcement learning [25] have been explored to enhance a robot's localization accuracy. Even with advanced algorithms and the latest sensors, there is still a chance of failure or inaccurate localization due to uneven surfaces. Furthermore, the prolonged operation of robots on uneven surfaces accelerates the degradation or failure of a robot's hardware system [26].
A balance should be struck between improvements to the hardware and software of robots and changes to the infrastructure or environmental conditions where the robot is deployed [27,28]. The Design for Robot (DfR) method is an evolving research field that concentrates on altering infrastructural and environmental conditions in the deployment region to improve the performance of robots without complicating the robot's system architecture [29,30,31].
The key aspects considered in the DfR methodology are infrastructural adaptations, accessibility for robot operation, human–robot interaction, safety and ethical considerations, and collaboration with multiple stakeholders [32,33]. In other words, this research niche aims to improve the robot’s inclusivity in deployment areas to minimize failures and enhance the performance of robots in real-world applications. However, little or no research has been conducted to develop an autonomous robotic system to evaluate the inclusivity of a building for robots based on variations in surface unevenness (SU).
This paper proposes a novel method to autonomously determine the Robot Inclusivity Level (RIL) in an environment based on variations in SU. Section 2 details the factors being considered for evaluating RIL in terms of the ground surface unevenness (defined as RIL-SU). Section 3 outlines the process for generating RIL-SU maps for a given environment. Section 4 discusses the experiment and validation of the proposed framework for the generation of RIL-SU maps. Lastly, Section 5 concludes the paper.

2. Evaluating Robot Inclusivity

2.1. Effect of Surface Unevenness/Vibration on Localization

For autonomous mobile robots, uneven terrain can cause localization and path planning issues during operation [34]. These robots perceive their surroundings using sensors such as Inertial Measurement Units (IMUs), LiDARs, cameras, and wheel encoders. Uneven surfaces can introduce errors in the data collected by these sensors, preventing the robot from sensing its location and orientation and, ultimately, from executing its tasks [35] (examples are given in Figure 1). Prominent errors/issues posed to localization-related sensing by unevenness are as follows:
  • Wheel slippage
    On uneven terrain, wheel slippage can occur, causing inaccuracies in odometry-based localization methods that rely on precise wheel rotations [36].
  • Poor feature detection
    Vibrations transmitted to the vision system may impede the accurate perception of features used for localization, such as landmarks or visual cues, making it difficult for the robot to determine its position accurately [37].
  • Unpredictable sensor data fluctuation
    Uneven terrain can cause unpredictable fluctuations in sensor readings, such as from wheel encoders or inertial measurement units (IMUs), leading to inconsistencies in localization through prediction models [38].
To minimize localization errors in autonomous mobile robots traversing rough terrain, a strategy combining advanced technology and design techniques must be used. Adaptive algorithms and sensor fusion are effective when combined [15,39]. State estimation accuracy can be improved using methods such as data filtering and fusion to integrate different sensor data. Terrain-adaptive algorithms limit the impact of sensor noise by adjusting the robot according to the surface properties of its work environment. Navigation accuracy may be improved even further by designing robots with suspension systems and other devices to manage uneven surfaces. When combined, these tactics enable more dependable and effective robot performance in unpredictable environments. However, these methods may be computationally expensive, complex in design, or require multiple sensors working in tandem. Robot-inclusive design principles, as laid out under the DfR approach, could be used to mitigate this issue.
Apart from the localization issues, the continuous exposure of robots to vibrations resulting from uneven surfaces leads to heightened mechanical stress, accelerating the deterioration of various components over time [40]. These vibrations can cause fatigue in critical parts, such as joints and motors, potentially shortening the operational lifespan of the robot. Moreover, repeated exposure to such vibrations may also induce structural damage, compromising the overall performance and reliability of the robot in the long term. Thus, mitigating the effects of surface unevenness becomes essential to preserve the longevity and functionality of the robot in diverse operating environments.
By assessing the surface unevenness of the deployment site, the proportion of robot performance failures can be reduced through rectification and modification of the environment for robot deployments or by planning paths to avoid poorly inclusive areas. This approach can enhance the productivity of robots by minimizing failures.

2.2. Measuring Surface Unevenness Using IMU

Surface unevenness can be the result of various factors in a building environment. There are three main distinguishing factors: surface inclinations, steps, and surface roughness. The unevenness of the surface causes vibrations that move the robot along the three-dimensional axes. An IMU can be used to continuously measure these data.
While the robot navigates through the environment, the IMU sensors collect the orientation as quaternion (q) data, linear accelerations (a), and angular velocities (ω) along the three axes. Algorithm 1 details the use of IMU data to measure the surface unevenness as a single parameter.
Algorithm 1 Algorithm for path unevenness index calculation

Input: IMUData
Output: U

n ← buffer_size
α ← filter_factor
while robot navigates do
    call IMU_data
end while

function IMU_data(q, a, ω)
    q_f, a_f, ω_f ← FilteringNoise(q, a, ω)
    (ψ, θ, ϕ) ← EulerAngleConverter(q_f)
    |a| ← √((l1·a_x)² + (l2·a_y)² + (l3·a_z)²)
    |ω| ← √((l4·ω_x)² + (l5·ω_y)² + (l6·ω_z)²)
    call BufferLoader
    call UnevennessIndexCalculator
end function

function BufferLoader(ψ, θ, ϕ, |a|, |ω|)
    for number of data points < buffer_size do
        buffer_array ← load data
    end for
    return buffer_array
end function

function UnevennessIndexCalculator
    calculate standard deviations over buffer_array
    U ← σ_ψ + σ_θ + σ_ϕ + σ_|a| + σ_|ω| + f
end function
A Kalman filter and a low-pass filter are applied to eliminate unwanted noise in the IMU data. The Kalman filter generates estimates of unknown variables that are often more accurate than those based on a single measurement alone by using a sequence of measurements taken over time, which may contain statistical noise and other errors. To achieve optimal filtering performance, the measurement noise covariance matrix for the factors is set to diag(10⁸, 10⁸, 10⁸), and the process noise covariance matrix is also set to diag(10⁸, 10⁸, 10⁸). These parameters were tuned by observing how well the filter nullified uncertainty and filtered out noise. The low-pass filter attenuates high-frequency noise in the raw IMU data, especially during the robot's motion. Analysis of the gathered IMU data is used to determine the cut-off frequency, ensuring that the specified frequency range retains the relevant signal properties while minimizing noise and undesirable frequencies. This eliminates unwanted interference between similar patterns in the IMU data and smooths the data. The corresponding filtered data are denoted as q_f, a_f, and ω_f.
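As a minimal sketch of the low-pass stage, the following first-order exponential filter attenuates high-frequency vibration noise in a stream of IMU samples; the smoothing factor plays the role of the filter_factor α in Algorithm 1, and its value here is an assumption.

import numpy as np

def low_pass(samples, alpha=0.2):
    # First-order exponential low-pass filter: blend each new sample with
    # the previous filtered value to suppress high-frequency vibration noise.
    filtered = np.empty_like(samples, dtype=float)
    filtered[0] = samples[0]
    for i in range(1, len(samples)):
        filtered[i] = alpha * samples[i] + (1.0 - alpha) * filtered[i - 1]
    return filtered

raw_az = np.array([9.81, 9.95, 9.70, 10.40, 9.30, 9.80])  # noisy z acceleration
print(low_pass(raw_az))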
The quaternion (q) data representing the robot orientation are transformed into Euler angles, which are more intuitive to interpret in three-dimensional space, making the data more amenable to analysis. Pitch (θ), roll (ψ), and yaw (ϕ) are the orientation components after conversion to Euler angles.
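The conversion itself is standard; a sketch of the EulerAngleConverter step, assuming the common (x, y, z, w) quaternion convention, is shown below.

import math

def quaternion_to_euler(x, y, z, w):
    # Standard quaternion-to-Euler conversion: roll (ψ), pitch (θ), yaw (ϕ).
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    sinp = 2 * (w * y - z * x)
    # Clamp pitch at ±90° to avoid a domain error near gimbal lock.
    pitch = math.copysign(math.pi / 2, sinp) if abs(sinp) >= 1 else math.asin(sinp)
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw

print(quaternion_to_euler(0.0, 0.0, 0.0, 1.0))  # level pose -> (0.0, 0.0, 0.0)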
The linear acceleration and angular velocity vectors along the X, Y, and Z axes are combined into two scalar values representing the overall intensity of the robot's movement (denoted by |a| and |ω|), using a weighted root-mean-square formulation. This type of abstraction is essential for spotting movement variances that might point to uneven surfaces. l1, l2, l3, l4, l5, and l6 are the weighting factors associated with each data component. The weighting factors l1 to l6 can be used to control the contributions of the linear accelerations and angular velocities along the X, Y, and Z axes to the compound scores. These weighting factors should be adjusted according to the specifications of the robot used for the autonomous auditing of robot inclusivity levels in a building's infrastructure. For example, if the audit robot has a vibration suppression feature that acts on the X axis, a higher weight should be assigned to the corresponding factors to nullify the effect of the vibration-suppressing mechanism.
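A direct transcription of these weighted magnitudes might look as follows; the equal weights mirror the setting used later in the experiments, and the sample values are illustrative.

import numpy as np

def weighted_magnitudes(a, omega, l=(1, 1, 1, 1, 1, 1)):
    # Weighted magnitudes |a| and |ω| as in Algorithm 1; l1..l6 scale the
    # per-axis contributions of linear acceleration and angular velocity.
    a_mag = np.sqrt((l[0] * a[0])**2 + (l[1] * a[1])**2 + (l[2] * a[2])**2)
    w_mag = np.sqrt((l[3] * omega[0])**2 + (l[4] * omega[1])**2 + (l[5] * omega[2])**2)
    return a_mag, w_mag

print(weighted_magnitudes((0.1, -0.2, 9.8), (0.01, 0.02, -0.03)))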
With the sliding window approach, the collection of current orientation (ψ, θ, ϕ), linear acceleration (|a|), and angular velocity (|ω|) points is continually gathered and updated. The buffer size determines the total number of values used in evaluating the surface unevenness (U).
In the last step of this procedure, U is computed as a single parameter value from the standard deviations of pitch, roll, yaw, linear acceleration, and angular velocity (denoted by σ_ψ, σ_θ, σ_ϕ, σ_|a|, and σ_|ω|) plus a factor f. Here, the factor f is an experimentally chosen constant that depends on the robot specifications. This metric formulation is flexible, enabling alterations in response to real-world observations and testing. This adaptability ensures that the estimated surface unevenness precisely represents the actual unevenness level of the path travelled by the robot auditing the zone.
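Putting the buffering and the final summation together, a compact sliding-window implementation of the unevenness index could look like the sketch below; the buffer size and the constant f are tuning assumptions rather than values prescribed by the paper.

from collections import deque
import numpy as np

class UnevennessIndex:
    def __init__(self, buffer_size=50, f=0.0):
        # One fixed-length buffer per signal: ψ, θ, ϕ, |a|, |ω|.
        self.buffers = [deque(maxlen=buffer_size) for _ in range(5)]
        self.f = f

    def add_sample(self, roll, pitch, yaw, a_mag, w_mag):
        for buf, val in zip(self.buffers, (roll, pitch, yaw, a_mag, w_mag)):
            buf.append(val)

    def value(self):
        # U = σ_ψ + σ_θ + σ_ϕ + σ_|a| + σ_|ω| + f  (per Algorithm 1)
        return sum(np.std(buf) for buf in self.buffers) + self.f

ui = UnevennessIndex(buffer_size=50, f=0.0)
for sample in [(0.01, -0.02, 1.57, 9.80, 0.04), (0.03, 0.01, 1.58, 9.95, 0.06)]:
    ui.add_sample(*sample)
print(f"U = {ui.value():.4f}")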

2.3. Determining the Robot Inclusivity Level (RIL)

The RIL (Robot Inclusivity Level) of a particular location indicates how well a robot can traverse the area without encountering performance issues. Thus, the RIL is crucial for evaluating the suitability of a building environment for a robot’s optimal performance. For example, a low value of U suggests fewer issues for robots’ navigation. Therefore, a higher RIL-SU should be assigned to areas with a low U, while areas with a high U should be assigned a lower RIL, as a high U can lead to navigation issues such as localization errors. Based on these observations, a rule-based framework is developed to categorize RIL-SU with respect to surface unevenness into three levels: High RIL, Moderate RIL, and Low RIL.
Let U_{X,Y} be the surface unevenness value recorded for the location (X, Y) of the environment. Surfaces that are primarily level and smooth are represented by environments with low unevenness values, less than the threshold U_L. This correlates with a ‘High’ RIL, as in (1), enabling robots to perform navigation tasks with minimal issues in that location.
The importance of RIL-SU becomes more pronounced as unevenness values increase, falling between U_L and U_U, indicating greater terrain irregularity and slight elevation changes. This presents navigational challenges for robots. To navigate these conditions successfully, robots with intermediate inclusiveness scores require advanced mapping and localization algorithms, sophisticated sensors, and versatile motion mechanisms (such as adjustable suspension systems), especially if the environment cannot be modified to mitigate these challenges. Therefore, a ‘Moderate’ RIL is assigned to these locations.
A surface with high unevenness implies a high probability of failure when the robot traverses that location. Such excessive unevenness may be caused by cables on the floor and rough floor finishes, such as coarse gravel or uneven stone tiles used for the floor surface. This indicates that the unevenness exceeds the threshold U_U, thus resulting in a ‘Low’ RIL being assigned to such surface conditions.
RIL = High       if U_{X,Y} < U_L
      Moderate   if U_L < U_{X,Y} < U_U          (1)
      Low        if U_U < U_{X,Y}
The thresholds U_L and U_U are determined empirically by running a robot through several rounds under various unevenness conditions. The values defined for U_L and U_U are 0.09 and 2, respectively. These threshold levels should be adjusted according to the features of the robot being considered, such as the availability of vibration-suppressing mechanisms. The threshold levels defined in this paper are suitable for most mobile robots without vibration-suppressing mechanisms.
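In code, the rule of Equation (1) reduces to a small classification function; this sketch uses the thresholds reported above (the handling of exact boundary values, undefined in (1), is an assumption).

U_L, U_U = 0.09, 2.0  # thresholds for robots without vibration suppression

def classify_ril(u):
    # Rule-based RIL-SU classification per Equation (1); values exactly at a
    # threshold are assigned to the lower-unevenness band by convention here.
    if u < U_L:
        return "High"
    if u < U_U:
        return "Moderate"
    return "Low"

for u in (0.05, 0.5, 1.5, 2.4):
    print(u, "->", classify_ril(u))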

3. Automated RIL Map Generation

3.1. Robot Platform

The proposed framework for the autonomous evaluation of the RIL-SU of buildings was implemented on the Meerkat robot platform shown in Figure 2. Meerkat is an autonomous mobile robot with a differential drive mechanism for navigational tasks. The platform's sensors and embedded systems ensure reliable navigation, accurate positioning, and effective environmental mapping. DC motors with drivers (Oriental Motor, Elk Grove Village, IL, USA) drive the robot's wheels, and the robot's odometry is calculated from the data received from the encoders. The LIDAR sensor (Sick TiM581-2050101, Waldkirch, Germany) mounted on the robot is utilized for creating maps of the environment, identifying obstacles, devising safe routes, and precise navigation. An IMU (VectorNav VN-100, Dallas, TX, USA) positioned at the base of the Meerkat robot ensures optimal results in calculating the robot's orientation and surface unevenness (U). A stereo vision camera (ZED 2, Stereolabs, San Francisco, CA, USA) is also installed on the robot and can be used for vision-based tasks such as object tracking and detection. An onboard high-performance CPU ensures seamless communication between the integrated sensors and real-time computational processing of data, enabling autonomous tasks to be performed with high efficiency.
The system architecture of the robot is shown in Figure 3 and was developed using the Robot Operating System (ROS). The robot's sensor system sends data to the ROS architecture of the embedded system. The scan filter node filters the laser scan data received from the LiDAR, and the ‘gmapping’ algorithm [41] is used to generate the map of the deployment region. The Extended Kalman Filter (EKF) in the localization package rectifies the error in the odometry, and the Adaptive Monte Carlo Localization (AMCL) [42] package localizes the robot within the given map. The ROS nodes in this architecture send and receive the necessary information via ROS topics to perform autonomous navigation in the deployment region.
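A minimal sketch of how such an architecture can tag the unevenness index U to the robot's pose is given below. It uses common ROS 1 topic names and message types (/imu/data, /amcl_pose); these conventions, and the placeholder compute_u, are assumptions rather than the exact node layout of the Meerkat platform.

#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Imu
from geometry_msgs.msg import PoseWithCovarianceStamped

samples = []  # accumulated (x, y, U) tuples from the audit run

def compute_u(imu_msg):
    # Placeholder: in the full pipeline this runs the filtering, buffering,
    # and standard-deviation steps of Algorithm 1 (see the sketches above).
    return abs(imu_msg.linear_acceleration.z - 9.81)

def imu_cb(msg):
    imu_cb.latest = msg  # keep the most recent IMU sample

def pose_cb(msg):
    # Pair the current AMCL pose with the unevenness of the latest IMU data.
    p = msg.pose.pose.position
    if getattr(imu_cb, "latest", None) is not None:
        samples.append((p.x, p.y, compute_u(imu_cb.latest)))

if __name__ == "__main__":
    rospy.init_node("ril_su_tagger")
    rospy.Subscriber("/imu/data", Imu, imu_cb)
    rospy.Subscriber("/amcl_pose", PoseWithCovarianceStamped, pose_cb)
    rospy.spin()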

3.2. Tagging RIL in a Map

This section describes the ROS-based process that combines data collection, processing, and visualization to generate the RIL-SU map of a robotic environment. The process is broken down into the following discrete steps:
  • Collect the data from the IMU, LIDAR, and wheel encoder sensors while performing the zigzag navigation inside the environment.
  • Calculate the surface unevenness (U) score from the IMU data. A dedicated node is created to process the data to obtain U in a particular region.
  • Convert the recorded U to RIL-SU.
  • Tag the RIL-SU onto the corresponding locations of the map generated from the Simultaneous Localization and Mapping (SLAM) system.
  • Using the Matplotlib library, process all the data to generate the RIL-SU heatmap of the deployed area (a plotting sketch follows below).
The RIL-SU map is colour-coded to specify High, Moderate, and Low RIL zones in the considered area.
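The following sketch illustrates how tagged (x, y, U) samples can be rendered as a three-band colour-coded map with Matplotlib; the grid resolution, example samples, and colour choices are illustrative assumptions.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap, BoundaryNorm

# Example (x, y, U) samples tagged during the audit run (illustrative values).
samples = [(0.5, 0.5, 0.03), (1.5, 0.5, 0.70), (2.5, 0.5, 2.60),
           (0.5, 1.5, 0.05), (1.5, 1.5, 1.20), (2.5, 1.5, 3.10)]
U_L, U_U = 0.09, 2.0
res = 1.0  # grid cell size in metres

# Rasterize the samples onto a coarse occupancy-style grid.
grid = np.full((2, 3), np.nan)
for x, y, u in samples:
    grid[int(y // res), int(x // res)] = u

# Three-band colour map: green = High RIL, yellow = Moderate, red = Low.
cmap = ListedColormap(["green", "yellow", "red"])
norm = BoundaryNorm([0.0, U_L, U_U, np.nanmax(grid) + 1.0], cmap.N)
plt.imshow(grid, origin="lower", cmap=cmap, norm=norm)
plt.colorbar(label="Surface unevenness U")
plt.title("RIL-SU heatmap (illustrative)")
plt.savefig("ril_su_heatmap.png")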

4. Experimental Validation

4.1. Experimental Design

Experiments were conducted in three building environments: a study area, a printing room, and a semi-outdoor connecting bridge at the Singapore University of Technology and Design. Equal weight values were used for l1 to l6 in the experiments, since the audit robot used in this study does not have axis-specific vibration-suppression features.
The experiments were conducted in two phases. In the first phase, the Meerkat robot was slowly moved in a zigzag pattern for area coverage at a site to audit robot inclusivity. This allowed the robot to extract data related to surface unevenness and subsequently generate the RIL-SU heatmaps for the site. After generating the RIL-SU heatmap, its compliance with the real situation was analysed by conducting a validation test in the second phase. These sites were chosen because robots (for telepresence and cleaning functions) are expected to be deployed in them in the near future.

4.2. Test Site 1: Study Area

The study area was a multi-purpose space with synthetic polymer material as the predominant floor material. Challenges posed by environmental conditions included a carpeted area in one part of the study area and a difference in floor levels near access doors to technical rooms. The generated LiDAR map of the study area and the path taken by the robot during the auditing process are shown in Figure 4a. The corresponding unevenness of the surface (U), measured using Algorithm 1, is plotted onto the map as depicted in Figure 4b. The resultant RIL-SU heatmap for this site is presented in Figure 4c.
To validate the generated RIL-SU heatmap, three sample zones with different RILs were selected in the second phase. The selected zones are represented as ‘L1’, ‘L2’, and ‘L3’ in Figure 4b,c. The robot traversed the selected locations at its nominal speed. Location ‘L1’ in the surface unevenness map suggested that U was around 0.5. This low surface unevenness was consistent with manual observations of the environment, as this zone's surface was smooth without uneven bumps, as shown in Figure 5a. According to the generated RIL-SU heatmap, location ‘L1’ was tagged as ‘High’ RIL. When the robot traversed this location, no distortion of the SLAM output was observed, validating the high robot inclusivity at the location.
Location ‘L2’ was a slightly bumpy area due to the height difference of the floor mat and the floor. Thus, a U value around 1.5 was observed (see unevenness heatmap in Figure 4b). As a result of this moderately elevated U, the RIL of that location was tagged as ‘Moderate’ (see Figure 4c). When the robot traversed this location, a drift on the laser scan (Figure 5b) was observed, indicating the moderate RIL.
According to the RIL heatmap, location ‘L3’ was a ‘Low’ RIL zone. Here, the surface was too uneven for the robot to traverse reliably, as shown in Figure 5c. When the robot traversed this location, a large distortion of the SLAM output was observed, implying navigation failure. This result is in agreement with the generated RIL-SU heatmap.

4.3. Test Site 2: Printing Room

The printing room had a skewed trapezoidal floor plan and a divider wall at the centre. Obstacles in the form of raised rubber mats and cardboard sheets were placed to simulate ground bumps and infrastructure, such as ground power outlet covers and raised platforms, that the robot can traverse over at a risk to its localization capabilities.
The generated LiDAR map of the printing room and the path taken by the robot are shown in Figure 6a. The corresponding unevenness of the surface (U), measured using Algorithm 1, was plotted onto the map as depicted in Figure 6b. The resultant RIL-SU heatmap for this site is presented in Figure 6c.
To validate the RIL-SU heatmap, three zones with different RIL-SU values were selected at the site, similar to the previous experiment. The selected zones are represented as ‘L4’, ‘L5’, and ‘L6’ in Figure 6b,c. The robot was made to pass through the selected locations at its nominal speed and was also made to run multiple times around each zone. At location ‘L4’, there were no external materials placed on the floor. Hence, the unevenness of the surface was low, as shown in Figure 6b. During the experiment at ‘L4’, the Meerkat robot's mapping and localization were precise, and there were no navigation issues, as seen in Figure 7a, indicating the robot's high inclusivity in this zone. This validates the ‘High’ RIL in Figure 6c.
Location ‘L5’ was chosen where medium-raised rubber mats were placed, as seen in Figure 7b. Thus, a U value of around 2 was observed (see the unevenness heatmap in Figure 6b). As a result of this moderately elevated U, the RIL of that location was tagged as ‘Moderate’ (see Figure 6c). The robot's laser scan initially did not show any drift, but after the robot traversed the zone multiple times, accumulated drift was observed, consistent with the medium inclusivity and validating the RIL in Figure 6c.
Finally, at location ‘L6’ in Figure 7c, multiple uneven bumpy tiles were placed in the zone. The U in this zone was higher (see Figure 6b), and the inclusivity was tagged as ‘Low’ (see Figure 6c). The Meerkat robot's laser scan drifted considerably once the robot tried to traverse across this zone (see Figure 7c), implying the robot's inability to reach its destination coordinates and indicating ‘Low’ inclusivity. The results are in agreement with the generated RIL-SU heatmap.

4.4. Test Site 3: Connector Bridge Area

The connector bridge is part of a network of outdoor corridors and terraces that link different campus buildings and is exposed to the weather. It contains two primary materials, areas with natural wood planks and areas with cementitious materials, with varying levels of surface roughness.
The generated LiDAR map of the connector bridge and the path taken by the robot are shown in Figure 8a. The corresponding unevenness of the surface (U), measured using Algorithm 1, is plotted onto the map as depicted in Figure 8b. The resultant RIL-SU heatmap for this site is presented in Figure 8c.
To validate the RIL-SU heatmap, three zones with different RIL-SU values were selected at the site, similar to the previous experiments. The selected zones are represented as ‘L7’, ‘L8’, and ‘L9’ in Figure 8b,c. The robot was made to pass through the selected locations at its nominal speed and was also made to run multiple times around each zone. At location ‘L7’, the cementitious material provided enough friction between the wheels and the floor for the robot to avoid slippage and jerks. Hence, the unevenness of the surface was low, as reflected in Figure 8b. During the experiment at ‘L7’, the Meerkat robot's mapping and localization were precise, and there were no navigation issues, as seen in Figure 9a, indicating the robot's high inclusivity in this zone. This validates the ‘High’ RIL in Figure 8c.
Location ‘L8’ was chosen where wooden planks with uneven gaps were present, as seen in Figure 9b. Thus, a U value of around 1 was observed (see the unevenness heatmap in Figure 8b). As a result of this moderately elevated U, the RIL of that location was tagged as ‘Moderate’ (see Figure 8c). The robot's laser scan initially did not show any drift, but after the robot traversed the zone multiple times, accumulated drift was observed, consistent with the medium inclusivity and validating the RIL in Figure 8c.
Finally, at location ‘L9’ in Figure 9c, a slightly broken wooden plank made the zone more uneven. The U in this zone was higher (see Figure 8b), and the inclusivity was tagged as ‘Low’ (see Figure 8c). The Meerkat robot's laser scan drifted considerably once the robot tried to traverse across this zone (see Figure 9c), implying the robot's inability to reach its destination coordinates and indicating ‘Low’ inclusivity. The results are in agreement with the generated RIL-SU heatmap.

5. Conclusions

This paper proposes a novel framework for autonomously evaluating the RIL-SU of a building based on surface unevenness. To this end, the surface unevenness of the work site of mobile service robots is analysed as an operational factor under robot inclusivity principles. The proposed framework measures the surface unevenness using an IMU installed on a robot. Excessive vibrations caused by surface unevenness can disrupt the sensing abilities a robot needs for proper localization and increase the wear and tear of mechanical components. The measured surface unevenness is converted to RIL-SU using a rule-based approach. Finally, the framework generates RIL-SU heatmaps for a given building environment. The proposed framework was implemented on a robot for the autonomous auditing of RIL-SU, and experiments were conducted to validate the compliance of the generated RIL-SU heatmaps. The experimental results show that the RIL-SU heatmaps generated by the proposed framework are valid.
These RIL-SU heatmaps would be useful for informing building owners about the locations of problematic zones for robot deployments or for supporting good human–robot interaction experiences, enabling possible rectification and further evaluations when necessary. These rectifications involve restricting robot operations in low-RIL-SU areas or modifying the building environment to improve RIL-SU in those areas. While certain deviations and defects cannot be avoided in construction, other design-related areas can be easily improved without compromising human comfort. Feasible design modifications to increase robot inclusivity in building environments may include aspects such as material selection for the ground surface, minimizing the need for steps, level changes and inclined surfaces, and creating barrier-free zones demarcated by different surface levels for robot deployments. However, there needs to be a balance between robot performance and its technical/economic limitations, as well as the constraints posed by the building environment.

Author Contributions

Conceptualization: M.R.E., C.S.C.S.B. and M.A.V.J.M.; Data curation: C.S.C.S.B., Z.Z. and M.S.K.Y.; Formal analysis: C.S.C.S.B., M.A.V.J.M., M.B. and Y.W.; Funding acquisition: M.R.E.; Investigation: C.S.C.S.B., Z.Z. and M.S.K.Y.; Methodology: C.S.C.S.B., M.S.K.Y. and M.A.V.J.M.; Project administration: M.R.E.; Resources: M.R.E.; Software: C.S.C.S.B. and Z.Z.; Supervision: M.R.E. and M.A.V.J.M.; Validation: C.S.C.S.B. and Z.Z.; Visualization: M.S.K.Y. and C.S.C.S.B.; Writing—original draft: C.S.C.S.B. and M.S.K.Y.; Writing—review and editing: M.A.V.J.M., M.B. and Y.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Robotics Programme under its National Robotics Programme (NRP) BAU, Ermine III: Deployable Reconfigurable Robots, Award No. M22NBK0054, and also supported by A*STAR under its “RIE2025 IAF-PP Advanced ROS2-native Platform Technologies for Cross-sectorial Robotics Adoption (M21K1a0104)” programme.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Gibb, S.; Le, T.; La, H.M.; Schmid, R.; Berendsen, T. A multi-functional inspection robot for civil infrastructure evaluation and maintenance. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vancouver, BC, Canada, 24–28 September 2017; pp. 2672–2677.
  2. Wijegunawardana, I.; Muthugala, M.V.J.; Samarakoon, S.B.P.; Hua, O.J.; Padmanabha, S.G.A.; Elara, M.R. Insights from autonomy trials of a self-reconfigurable floor-cleaning robot in a public food court. J. Field Robot. 2024, 41, 811–822.
  3. Ramdani, N.; Panayides, A.; Karamousadakis, M.; Mellado, M.; Lopez, R.; Christophorou, C.; Rebiai, M.; Blouin, M.; Vellidou, E.; Koutsouris, D. A safe, efficient and integrated indoor robotic fleet for logistic applications in healthcare and commercial spaces: The endorse concept. In Proceedings of the 2019 20th IEEE International Conference on Mobile Data Management (MDM), Hong Kong, China, 10–13 June 2019; pp. 425–430.
  4. Cooper, S.; Di Fava, A.; Vivas, C.; Marchionni, L.; Ferro, F. ARI: The social assistive robot and companion. In Proceedings of the 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 31 August–4 September 2020; pp. 745–751.
  5. Ness, S.; Shepherd, N.J.; Xuan, T.R. Synergy Between AI and Robotics: A Comprehensive Integration. Asian J. Res. Comput. Sci. 2023, 16, 80–94.
  6. Trulls, E.; Corominas Murtra, A.; Perez-Ibarz, J.; Ferrer, G.; Vasquez, D.; Mirats-Tur, J.M.; Sanfeliu, A. Autonomous navigation for mobile service robots in urban pedestrian environments. J. Field Robot. 2011, 28, 329–354.
  7. Gross, H.M.; Boehme, H.J.; Wilhelm, T. Contribution to vision-based localization, tracking and navigation methods for an interactive mobile service-robot. In Proceedings of the 2001 IEEE International Conference on Systems, Man and Cybernetics. e-Systems and e-Man for Cybernetics in Cyberspace (Cat. No. 01CH37236), Tucson, AZ, USA, 7–10 October 2001; Volume 2, pp. 672–677.
  8. Builes, J.A.J.; Amaya, G.A.; Velásquez, J.L. Autonomous navigation and indoor mapping for a service robot. Investigación Innovación Ingenierías 2023, 11, 28–38.
  9. Zanuar, R.M.; Purnama, I.K.E.; Purnomo, M.H. Autonomous navigation and obstacle avoidance for service robot. In Proceedings of the 2019 International Conference on Computer Engineering, Network, and Intelligent Multimedia (CENIM), Surabaya, Indonesia, 19–20 November 2019; pp. 1–8.
  10. Corominas Murtra, A. Map-Based Localization for Urban Service Mobile Robotics. Ph.D. Thesis, Universitat Politècnica de Catalunya (UPC), Barcelona, Spain, 2011.
  11. Rone, W.; Ben-Tzvi, P. Mapping, localization and motion planning in mobile multi-robotic systems. Robotica 2013, 31, 1–23.
  12. Lee, D.; Chung, W.; Kim, M. Autonomous map building and smart localization of the service robot PSR. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No. 03CH37453), Las Vegas, NV, USA, 27–31 October 2003; Volume 1, pp. 454–459.
  13. Ekvall, S.; Kragic, D.; Jensfelt, P. Object detection and mapping for service robot tasks. Robotica 2007, 25, 175–187.
  14. Nagla, K.; Uddin, M.; Singh, D. Multisensor data fusion and integration for mobile robots: A review. IAES Int. J. Robot. Autom. 2014, 3, 131–138.
  15. Alatise, M.B.; Hancke, G.P. A review on challenges of autonomous mobile robot and sensor fusion methods. IEEE Access 2020, 8, 39830–39846.
  16. Olvera, T.; Orozco-Rosas, U.; Picos, K. Mapping and navigation in an unknown environment using LiDAR for mobile service robots. In Proceedings of the Optics and Photonics for Information Processing XIV, SPIE, Bellingham, WA, USA, 24 August–4 September 2020; Volume 11509, pp. 31–45.
  17. Brossard, M.; Bonnabel, S. Learning wheel odometry and IMU errors for localization. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 291–297.
  18. Biswas, J.; Veloso, M. Depth camera based indoor mobile robot localization and navigation. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 1697–1702.
  19. Fox, D.; Thrun, S.; Burgard, W.; Dellaert, F. Particle filters for mobile robot localization. In Sequential Monte Carlo Methods in Practice; Springer: Berlin/Heidelberg, Germany, 2001; pp. 401–428.
  20. Xu, X.; Pang, F.; Ran, Y.; Bai, Y.; Zhang, L.; Tan, Z.; Wei, C.; Luo, M. An indoor mobile robot positioning algorithm based on adaptive federated Kalman Filter. IEEE Sens. J. 2021, 21, 23098–23107.
  21. Rekleitis, I.M.; Dudek, G.; Milios, E.E. Multi-robot cooperative localization: A study of trade-offs between efficiency and accuracy. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, 30 September–4 October 2002; Volume 3, pp. 2690–2695.
  22. Wei, Z.; Li, B.; Guo, W.; Hu, W.; Zhao, C. On the Accuracy and Efficiency of Sensing and Localization for Robotics. IEEE Trans. Mob. Comput. 2020, 21, 2480–2492.
  23. Gutmann, J.S.; Burgard, W.; Fox, D.; Konolige, K. An experimental comparison of localization methods. In Proceedings of the 1998 IEEE/RSJ International Conference on Intelligent Robots and Systems. Innovations in Theory, Practice and Applications (Cat. No. 98CH36190), Victoria, BC, Canada, 17 October 1998; Volume 2, pp. 736–743.
  24. Nilwong, S.; Hossain, D.; Kaneko, S.i.; Capi, G. Deep learning-based landmark detection for mobile robot outdoor localization. Machines 2019, 7, 25.
  25. Garrote, L.; Torres, M.; Barros, T.; Perdiz, J.; Premebida, C.; Nunes, U.J. Mobile robot localization with reinforcement learning map update decision aided by an absolute indoor positioning system. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 1620–1626.
  26. Wijegunawardana, I.D.; Samarakoon, S.B.P.; Muthugala, M.V.J.; Elara, M.R. FMEA-Based Coverage-Path-Planning Strategy for Floor-Cleaning Robots. Adv. Intell. Syst. 2023, 5, 2300260.
  27. Muthugala, M.V.J.; Samarakoon, S.B.P.; Elara, M.R. Design by robot: A human-robot collaborative framework for improving productivity of a floor cleaning robot. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; pp. 7444–7450.
  28. Verne, G.B. Adapting to a robot: Adapting gardening and the garden to fit a robot lawn mower. In Proceedings of the Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020; pp. 34–42.
  29. Yeo, M.S.; Samarakoon, S.B.P.; Ng, Q.B.; Ng, Y.J.; Muthugala, M.V.J.; Elara, M.R.; Yeong, R.W. Robot-inclusive false ceiling design guidelines. Buildings 2021, 11, 600.
  30. Elara, M.R.; Rojas, N.; Chua, A. Design principles for robot inclusive spaces: A case study with Roomba. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 5593–5599.
  31. Yeo, M.S.; Samarakoon, S.B.P.; Ng, Q.B.; Muthugala, M.V.J.; Elara, M.R. Design of robot-inclusive vertical green landscape. Buildings 2021, 11, 203.
  32. Mohan, R.E.; Tan, N.; Tjoelsen, K.; Sosa, R. Designing the robot inclusive space challenge. Digit. Commun. Netw. 2015, 1, 267–274.
  33. Sandoval, E.B.; Sosa, R.; Montiel, M. Robot-Ergonomics: A proposal for a framework in HRI. In Proceedings of the Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 233–234.
  34. Chakraborty, N.; Ghosal, A. Kinematics of wheeled mobile robots on uneven terrain. Mech. Mach. Theory 2004, 39, 1273–1287.
  35. Belaidi, H.; Bentarzi, H.; Belaidi, A.; Hentout, A. Terrain traversability and optimal path planning in 3D uneven environment for an autonomous mobile robot. Arab. J. Sci. Eng. 2014, 39, 8371–8381.
  36. Smieszek, M.; Dobrzanska, M.; Dobrzanski, P. The impact of load on the wheel rolling radius and slip in a small mobile platform. Auton. Robot. 2019, 43, 2095–2109.
  37. Dewi, D.A.; Sundararajan, E.; Prabuwono, A.S.; Cheng, L.M. Object detection without color feature: Case study Autonomous Robot. Int. J. Mech. Eng. Robot. Res. 2019, 8, 646–650.
  38. Wang, C.; Wang, J.; Li, C.; Ho, D.; Cheng, J.; Yan, T.; Meng, L.; Meng, M.Q.H. Safe and Robust Mobile Robot Navigation in Uneven Indoor Environments. Sensors 2019, 19, 2993.
  39. Yeong, D.J.; Velasco-Hernandez, G.; Barry, J.; Walsh, J. Sensor and sensor fusion technology in autonomous vehicles: A review. Sensors 2021, 21, 2140.
  40. Chen, Z.; Chen, X.; Li, C.; Sanchez, R.V.; Qin, H. Vibration-based gearbox fault diagnosis using deep neural networks. J. Vibroeng. 2017, 19, 2475–2496.
  41. Tee, Y.K.; Han, Y.C. Lidar-based 2D SLAM for mobile robot in an indoor environment: A review. In Proceedings of the 2021 International Conference on Green Energy, Computing and Sustainable Technology (GECOST), Miri, Malaysia, 7–9 July 2021; pp. 1–7.
  42. Pol, R.S.; Aher, V.N.; Gaikwad, S.V.; Bhalke, D.G.; Borkar, A.Y.; Kolte, M.T. Autonomous Differential Drive Mobile Robot Navigation with SLAM, AMCL using ROS. Int. J. Intell. Syst. Appl. Eng. 2024, 12, 46–53.
Figure 1. (a) Localization drift caused by an uneven surface and the robot following an incorrect path; (b) map boundary distortion caused by an uneven surface.
Figure 2. Meerkat audit robot.
Figure 3. System architecture.
Figure 4. (a) A 2D LiDAR map of the study area and the path taken by the audit robot; (b) recorded unevenness variation as a heatmap; (c) resultant RIL-SU heatmap for the site.
Figure 5. Robot navigation performance at different zones in the study area during the validation phase. (a) Location 1, (b) Location 2, and (c) Location 3.
Figure 6. (a) A 2D LiDAR map of the printing room and the path taken by the audit robot; (b) surface unevenness map of the printing room; (c) RIL-SU colour map of the printing room.
Figure 7. Robot navigation performance in different zones in the printing room during the validation phase. (a) Location 4, (b) Location 5, and (c) Location 6.
Figure 8. (a) A 2D LiDAR map of the connector bridge area and the path taken by the audit robot; (b) surface unevenness map of the connector bridge area; (c) RIL-SU colour map of the connector bridge area.
Figure 9. Robot navigation performance in different zones in the connector bridge area during the validation phase. (a) Location 7, (b) Location 8, and (c) Location 9.