Article

Implementing Autonomous Control in the Digital-Twins-Based Internet of Robotic Things for Remote Patient Monitoring

1 Department of CS and IT, University of Malakand, Chakdara 18800, Pakistan
2 Department of Software Engineering, University of Malakand, Chakdara 18800, Pakistan
3 Department of Health Informatics, College of Applied Medical Sciences, Qassim University, Buraydah 51452, Saudi Arabia
4 Department of Computer Science, Texas Tech University, Lubbock, TX 79409, USA
* Author to whom correspondence should be addressed.
Sensors 2024, 24(17), 5840; https://doi.org/10.3390/s24175840
Submission received: 7 July 2024 / Revised: 26 August 2024 / Accepted: 30 August 2024 / Published: 9 September 2024
(This article belongs to the Section Internet of Things)

Abstract
Conventional patient monitoring methods require skin-to-skin contact, continuous observation, and long working shifts, causing physical and mental stress for medical professionals. Remote patient monitoring (RPM) assists healthcare workers in monitoring patients distantly using various wearable sensors, reducing stress and infection risk. RPM can be enabled by using the Digital Twins (DTs)-based Internet of Robotic Things (IoRT) that merges robotics with the Internet of Things (IoT) and creates a virtual twin (VT) that acquires sensor data from the physical twin (PT) during operation to reflect its behavior. However, manual navigation of PT causes cognitive fatigue for the operator, affecting trust dynamics, satisfaction, and task performance. Also, operating manual systems requires proper training and long-term experience. This research implements autonomous control in the DTs-based IoRT to remotely monitor patients with chronic or contagious diseases. This work extends our previous paper that required the user to manually operate the PT using its VT to collect patient data for medical inspection. The proposed decision-making algorithm enables the PT to autonomously navigate towards the patient’s room, collect and transmit health data, and return to the base station while avoiding various obstacles. Rather than manually navigating, the medical personnel direct the PT to a specific target position using the Menu buttons. The medical staff can monitor the PT and the received sensor information in the pre-built virtual environment (VE). Based on the operator’s preference, manual control of the PT is also achievable. The experimental outcomes and comparative analysis verify the efficiency of the proposed system.

1. Introduction

In conventional patient monitoring methods, medical personnel keep manual records and continuously monitor patients’ health. Hospitals have limited resources; thus, manually taking patients’ vital signs depends on many factors, including clinical workload, staff working hours, and patient diagnosis [1]. Furthermore, invasive devices are used for patient monitoring, which require skin-to-skin contact to estimate vital signs [2]. This raises the possibility of exposing medical personnel to infectious diseases because of their asymptomatic nature and high transmission rate [3]. Healthcare practitioners not only experience the common fear of being infected but also deal with other stresses, such as fear for the safety of their families and the deaths of colleagues [4]. Remote patient monitoring (RPM) can complement conventional treatment and provide an alternative that benefits patients’ and care providers’ social and financial well-being [5]. RPM gathers and transmits patient health data to medical professionals using various digital health technologies [6]. The data acquired by RPM are assessed by healthcare practitioners to implement modifications in patients’ treatment procedures [7]. It is an essential tool for healthcare providers to monitor and treat infected patients and those with chronic conditions [8]. It improves patient management and care capacities by enabling health professionals to spot diseases earlier and remotely examine patients with chronic or contagious conditions, as well as those recovering from surgeries inside hospitals or at home [2,9]. However, implementing RPM is a complicated and challenging task [10]. Many innovative technologies have been used to design and implement frameworks for RPM [11].
The Digital-Twins (DTs)-based Internet of Robotic Things (IoRT) [12] is the best candidate for RPM. The DTs-based IoRT integrates the Internet of Things (IoT) and robotics and creates a virtual replica [virtual twin (VT)] of the physical robotic thing [physical twin (PT)] that receives real-time data from the PT to update the status of the VT in the virtual environment (VE) [13]. It combines virtual and physical spaces, allowing synchronized operation of the physical and virtual entities [14]. When the physical twin (PT) is altered, the virtual twin (VT) is updated automatically to reflect the same changes [15]. This framework consists of three main components, i.e., the physical object, the virtual replica, and a bi-directional data connection between them. Integrating the physical and virtual entities enables the virtual model to accurately mimic the physical system, allowing the physical object to adjust its status using the feedback from the virtual entity when needed [16,17]. Monitoring patients remotely through robotic systems is essential to medical care [18]. It saves significant time and notifies physicians earlier during an emergency to save the patient’s life [19]. Robotic things (RTs) can autonomously act or react to changes occurring in their surroundings [13]. They can potentially combat infectious diseases because they resist microbes and can independently reach places where human access is nearly impossible or dangerous [20,21]. They can be used in hospitals for disinfecting surfaces [22], mopping and cleaning [23], and drug distribution [24]. Combining robots with IoT devices and sensors provides real-time information regarding patient health and reduces the risk of human errors in prescribing medication and procedures [25]. It enhances the capacity of IoT through active sensing and actuation via robotic devices/things [26].
However, manually navigating robotic things is a time-consuming and complex task. It causes extra cognitive fatigue for the operator because of increased task complexity or the demand for additional situational awareness, affecting trust dynamics, complacency, and job performance. Also, operating robotic systems requires proper training and long-term experience.
Autonomous robotic systems can efficiently deal with these issues. They can enable the treatment and monitoring of patients with chronic and infectious diseases, replacing or distributing the responsibilities of the healthcare workers performing complex tasks, and improving the overall medical care services [27]. Also, they can traverse a path towards their destination without encountering obstructions in the physical environment [28], eliminating the workload and fatigue experienced by medical staff during manual navigation.
This paper presents a real-time RPM system that implements autonomous control in the DTs-based IoRT. The proposed work is an extension of our previous published study [12], in which the medical staff had to manually navigate the PT to the destination by operating its VT to collect health data from the biomedical sensors of the patient. A pre-built virtual environment (VE) was used to provide an accurate perception of the PT’s surroundings and visualize the collected health data for medical inspection.
The proposed system enables the PT to arrive at the patient room autonomously, collect health-related data from the patient-mounted sensors, transmit the information to the base station for medical examination, and return to the medical service. The developed decision-making algorithm can efficiently navigate the PT while evading various obstacles. Rather than manually navigating the PT to the target, the operator only has to initiate it towards a specific destination (patient room) by selecting one of the buttons (Room 1, Room 2, Room 3, Room 4) from the Menu. The healthcare workers can observe the PT and the health information of the infected patient in the VE. Switching from autonomous to manual mode is achievable depending on the user’s preference.

1.1. Motivation

Currently, due to rising medical complications, population growth, and various pandemics, robot-based RPM has become increasingly important. Monitoring patients remotely using robotic devices saves significant time and notifies physicians earlier during an emergency to save the patient’s life. Robots can combat infectious diseases since they are immune to microbes and can travel to areas where human access is either difficult or dangerous. However, the manual operation of robotic systems causes cognitive fatigue for the operators because of increased task complexity or the demand for additional situational awareness, affecting trust dynamics, complacency, and job performance [29]. Also, operating robotic systems requires proper training and long-term experience [30]. This study aims to safeguard robot operators from the mental fatigue they encounter while navigating robotic devices manually. Enabling the RTs to navigate autonomously for RPM can resolve these issues.

1.2. Contribution

This research implements autonomous control in the DTs-based IoRT to remotely monitor patients with chronic or infectious diseases. The main contributions are:
  • Our proposed system enables the operator to remotely monitor the PT’s autonomous navigation in the VE and switch to manual control when needed.
  • We developed and implemented a decision-making algorithm for autonomous navigation and an obstacle avoidance mechanism that uses various geometrical patterns to evade obstructions for path recalculation.
  • We analyzed the performance of PT’s navigation and patient monitoring setup and performed a comparative analysis of the RPM systems and virtual reality (VR) and DTs frameworks.
The remaining part of this paper is structured as follows: Section 2 outlines the related work, Section 3 discusses the proposed work, Section 4 describes the experimental setup and performance evaluation, Section 5 presents the discussion, and Section 6 presents the conclusion and future work.

2. Related Work

This section provides a summary of the research studies relevant to our topic. In this era of pandemics and increased medical complications, robot-based RPM is an essential tool for healthcare practitioners to treat infected or chronic patients. In this regard, we reviewed the research publications that used robotic devices for RPM. The RPM systems described in the following section are based on robots that can be operated manually or autonomously.
Khan et al. [31] developed a manually navigating mobile Robot Doctor (RoboDoc) that measures vital signs to remotely monitor the health of COVID-19 patients without directly interacting with them. A desktop-based GUI is developed to visualize vital signs for monitoring. A tablet with a mobile application is used to enable conversation between patients and medical staff. The robot includes a DSLR camera connected to a Raspberry Pi 4 that communicates with the server via Wi-Fi.
Mamun et al. [32] updated their iWard robot for remotely monitoring hospitalized patients’ health conditions. The robot collects and analyzes the vital signs to monitor the patient’s physical condition. The health information is then transmitted to a server for storage and examination by healthcare professionals. Each patient is assigned a unique identity to differentiate between their records.
Mišeikis et al. [33] adjusted their autonomous assisting robot (Lio) to perform additional functionalities during the COVID-19 pandemic. Lio can monitor the body temperature of passers-by and perform disinfection of surfaces using UV-C light. The system uses a thermal camera to measure the body temperature remotely. In case of elevated temperature, the medical staff is notified to collect manual data of the suspected persons using a standard medicinal thermometer. Lio has a powerful onboard computing unit; thus, data processing is performed without transmitting the data to cloud services.
Cantone et al. [34] presented an IoRT system to remotely monitor the health conditions of elderly people, transmitting their vital signs to medical assistants and physicians to ensure appropriate care. Conversation between elderly people, physicians, and care providers is enabled through an external Telegram bot. Communication is provided using Wi-Fi, IEEE 802.22 [35], and Ethernet.
Rai et al. [36] designed and developed an autonomous virtual doctor robot (VDR) that can distantly monitor patients with COVID-19 without any physical contact. The VDR collects the patient’s vital signs using various sensors and transmits them to the medical practitioners via a Wi-Fi network. A mobile application (Blynk app) is developed that allows doctors to interact with the robot and receive health-related information over the Internet cloud.
Mireles et al. [37] proposed a nursing robot that can remotely monitor a patient’s vital signs (oxygen level, heart rate, blood pressure, and temperature). The robotic device can also assist people in the gait cycle. A graphical user interface (GUI) is used to present the data collected and processed by the system. Furthermore, the GUI is used for interaction between the robot, patients, and doctors. The ZigBee modules are used for communication between various entities of the system.
Some researchers have proposed RPM systems based on robotic devices that can work in both autonomous and manual modes.
Antony et al. [38] developed a medicine delivery and RPM robot capable of navigating in autonomous and manual modes. An infrared sensor is attached to the bottom of the robot to identify the path, and an ultrasonic sensor is mounted at the front to detect obstacles. The collected parameters (temperature, pulse rate) are sent to the doctor through the Internet for inspection. An Android application is developed that communicates with the robot via a Bluetooth module (HC-05) to receive the parameters.
Interaction and visualization are the key components for remote operations of robotic systems. We reviewed the related articles that used VR and DTs-based interfacing and supervision techniques to facilitate different aspects of life.
Some researchers leveraged the advantages of VR-based techniques to remotely operate a mobile robot.
Solanes et al. [39] developed a VR-based system for remotely controlling a mobile robot that accomplishes various tasks, including bomb disposal and human rescue. The VR interface allows the user to have a better view of the physical scenario and real-time human–robot interaction. The human operator directs the robot through the environment using their intellectual abilities, while the robot avoids collisions in space to take advantage of its quick response. The VE presents various elements needed for remote operation, such as the user reference, the mobile robot, a two-dimensional (2D) map of the environment, information regarding the task or robot, and the positions of the objects detected in real time. A gamepad is used to interact with the VE for extended periods.
Many researchers integrated VR and DT approaches to monitor and control robots performing various tasks.
Topini et al. [40] designed a robotic hand exoskeleton for rehabilitation based on integrating VR and DTs. The predefined VE is developed using the Webot framework. Mapping between the DTs of the hand exoskeleton is performed to mimic the status of the physical twin in the VE and enable the patient to physically perceive the virtual objects in hand via force feedback. As long as the virtual hand is not in contact with a barrier, the system remains in a free-motion state. When the virtual model encounters an obstruction, the virtual force values are sent to the physical hand exoskeleton as a reference signal for force feedback.
Kalinov et al. [41] developed a VR-based user interface for supervising an autonomous robot that transports stock in a warehouse to avoid the spread of COVID-19 between people. The virtual interface allows inexperienced warehouse workers to operate the heterogeneous robotic framework comprising an unmanned ground vehicle (UGV) and an unmanned aerial vehicle (UAV). It visualizes the virtual model of the system in the DT of the warehouse’s physical environment and a live view of real space from the onboard camera of the aerial vehicle. The user can interact with the interface via a hand-held VR device.
Ponomareva et al. [42] presented a robot manipulation system based on a VR interface. The region-based convolutional neural network detects the laboratory instruments and calculates their location in real space for visualization in the VE. The DT informs the operator about the position of the real robot. The VE represents the robot’s current status from different viewpoints without using complex camera-based techniques. The haptic device’s handle is used to direct the robot’s motion. At the end of the robot’s gripper, a visual camera is mounted to assist in the manipulation process.
Garg et al. [43] created a DT model of an industrial robot that provides synchronized control of the physical robot over a specified trajectory. The framework only supports FANUC robots and communicates over a client/server architecture. A VE is created to exhibit the DT and provide natural interaction with the DT and the physical robot. A VR device is used to control both the virtual and physical robots’ movements.
Tähemaa et al. [44] developed a DT model that can be used to control an industrial manufacturing system in real time. A VE is created to display the robotic system’s DT. The system displays the robotic cells and production lines and controls the entire process.
Laaki et al. [45] created a DT of the robotic arm intended to perform remote surgery. The DT and the real robot are mapped for synchronized operation. The DT is displayed in a virtual environment (VE) designed to provide an immersive experience. The virtual space resembled a medical setup and included a virtual model of a dummy patient. The movement of the head-mounted display (HMD) and handheld controllers is tracked using an infrared laser grid.
According to the literature review, DTs-based robots and VR have been used in industrial manufacturing, inspection tasks, patient rehabilitation, remote monitoring of autonomous robots, and remote surgeries. However, none of the existing systems applies these techniques to RPM. Also, they can only monitor or control a specific entity; they cannot connect to external sensors or devices for data acquisition or transmission. In robot monitoring, autonomous robot supervision has been suggested in only one paper [41]. However, that study lacks a navigation performance evaluation and a comparison with existing studies.
Most existing works on robot-based RPM do not include evaluation protocols to measure the robot’s navigation accuracy. They also do not measure data quality (DQ) dimensions for the monitoring data. Furthermore, they do not present a comparative analysis to verify the efficiency of the proposed frameworks over the existing systems.
The proposed system implements autonomous control in the DTs-based IoRT to remotely monitor patients with chronic or contagious diseases. The system enables monitoring of the autonomous robotic device in VE and allows its connections with external sensors and devices for RPM. Unlike the existing frameworks, the proposed approach includes an evaluation protocol to calculate navigation accuracy. It also measures the DQ dimensions of the monitoring data. It presents a comparison of the existing and the proposed approaches in navigation and DQ dimensions. It also provides a comparison of VR and DTs-based robotic systems. Furthermore, the framework allows manual control of the PT based on the user’s preference.

3. The Proposed System

The proposed system implements autonomous control in the DTs-based IoRT to remotely monitor patients with chronic or contagious diseases. The system’s components and operation are discussed in the following subsections. The framework comprises physical and virtual twins (DTs) connected through a communication link. The VT is visualized in the VE to depict the status of the PT in real time, as shown in Figure 1.
The data acquisition from the patient monitoring unit (PMU) is carried out using Bluetooth modules (HC-05). The radio transceivers (NRF24L01+) are employed for communication between the PT and the health service. A laptop PC is used for virtual renderings, data processing, and visualization.
The PT navigates autonomously in the RE for RPM. After arriving at the patient’s room, it connects with the PMU and receives sensor information for 10 s. The health parameters are then transmitted to the medical service and displayed in the VE for medical inspection. The VT in the virtual space is used to monitor the PT operating in the actual environment. Because of the autonomous nature of the proposed system, the medical staff are relieved of the manual control workload and mental fatigue. The operator only has to monitor the VT in the virtual space to observe the PT’s actual status in the RE. Manual control of the PT depends on the operator’s preference. The proposed system diagram is shown in Figure 2, which includes all the main elements of the virtual setup, the physical robotic device, and the patient monitoring setup.
The key components that should be considered while designing any autonomous robotic system are locomotion, perception, cognition, and navigation, as described in [46,47]. The proposed system is built on the four important elements used for autonomous navigation.

3.1. Locomotion

The first step in building an autonomous robot is locomotion. Even though robots often move in safe and controlled areas, they must occasionally traverse extreme or unknown settings. Robots may also function in other environments, such as air or water. Autonomous robots rely not merely on control but also on locomotion systems. Locomotion is an important subject in creating autonomous robotic systems, and it depends not only on the physical environment in which the robots navigate but also on technological factors such as maneuverability, stability, and efficiency. The proposed system leverages a four-wheeled mobile robotic thing (PT) that moves through the corridor autonomously, collecting health parameters from biomedical sensors and transmitting them to a medical service for examination. The PT moves inside an indoor space on a flat surface; hence, a wheeled robot is preferable to a legged or treaded device. Wheeled robots do not pose balance issues because their wheels generally remain in contact with the ground surface.

3.2. Perception

Navigating autonomous mobile robots requires vital information regarding the surrounding environment and the robot itself. This is achieved through the robot’s onboard sensors; subsequently, relevant information is extracted from the sensors’ measurements. The sensor data are used to conduct the robot positioning, representation, and mapping tasks. Currently, a number of sensors are available that enable robotic devices to know about their surroundings and the activities around them, such as RGB-D cameras, LiDARs, and sonars. However, the choice among these is broad and depends on the specific requirements [48]. The proposed system’s PT carries various sensors to gather real-time information concerning the navigation and position of the PT. A speed sensor (LM393) is used, whose values are utilized to calculate the distance covered by the PT in the RE. Three ultrasonic distance sensors (HC-SR04) are fitted to the robot for obstacle detection and avoidance, providing the user with situational awareness; the sensors are mounted on the left, right, and front sides of the PT. An accelerometer/gyroscope sensor (MPU6050) is installed to calculate the robot’s direction in the RE.
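As an illustration of how such readings become navigation quantities, the sketch below converts LM393 encoder pulses into travelled distance; the pulses per revolution and wheel diameter are assumed values for illustration, not specifications from the paper.

```python
# Hypothetical sketch: estimating distance travelled from an LM393
# slot-type speed sensor. Pulse count per wheel revolution and wheel
# diameter are assumed values, not taken from the paper.
import math

PULSES_PER_REV = 20          # slots in the encoder disc (assumed)
WHEEL_DIAMETER_M = 0.065     # wheel diameter in metres (assumed)

def distance_from_pulses(pulse_count: int) -> float:
    """Convert accumulated encoder pulses into distance in metres."""
    revolutions = pulse_count / PULSES_PER_REV
    return revolutions * math.pi * WHEEL_DIAMETER_M

# e.g., 400 pulses -> 20 revolutions -> ~4.08 m
print(round(distance_from_pulses(400), 2))
```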

3.3. Cognition

Once the environment knowledge and the robot’s direction and destination are known, the cognitive system plans the path to achieve the objectives. Hence, the cognitive phase is referred to as the decision-making and implementation phase. Using the sensor information, the cognition system decides the next action to achieve the goal. In the context of a mobile robot, the exact aspects of the cognition phase are directly connected with the robot’s robust navigation. The following section explains the proposed system’s obstacle avoidance and path calculation criteria.

3.3.1. Obstacle Avoidance

The ultrasonic distance sensors’ (HC-SR04) values are used to detect, visualize, and avoid both static and dynamic obstructions in real time. When a sensor detects an obstruction closer to the PT than the threshold distance (0.4 m), it is visualized as a cube to provide the user with situational awareness. The distance values are also displayed in the virtual space, along with their labels (Left, Right, and Front), to provide more information about the detected obstacle. If there are no obstacles, the PT travels the straight path towards the target position (TP) and then back to the health center. If the sensors detect obstructions within the threshold distance, the decision-making algorithm analyzes them to ensure safe navigation; a sketch of the resulting maneuvers follows this list.
  • Left, right, or left and right: these barriers are ignored by the PT, as they do not affect its movement towards the TP.
  • Front: the PT stops moving, turns towards the right (45°), and moves forward (0.5 m) to avoid the obstacle. After evading the barrier, it turns towards the left (90°) and moves forward (0.5 m). Finally, it takes a right turn (45°) to align with the straight path to reach the destination.
  • Front and left: the PT performs the same maneuver as for a front obstacle: stop, right turn (45°), forward (0.5 m), left turn (90°), forward (0.5 m), and a final right turn (45°).
  • Front and right: the PT stops moving, turns towards the left (45°), and moves forward (0.5 m) to avoid the obstacles. After evading the obstructions, it turns towards the right (90°) and moves forward (0.5 m). Finally, it takes a left turn (45°) to align with the straight path to reach the desired TP.
  • Front, left, and right: the PT stops moving, moves backward (0.5 m), turns towards the right (90°), and moves forward (0.7 m). Then, it turns left (90°) and moves forward (1 m) to avoid the obstacles. After evading the obstructions, it turns towards the right (45°) and moves forward (0.8 m). Finally, it takes a right turn (45°) to align with the straight path to reach the particular TP.
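To make the maneuver logic concrete, the sketch below encodes the sequences above as data and looks up the plan for a detected obstacle pattern; the function and action names are illustrative assumptions, not the authors’ implementation.

```python
# A minimal sketch of the avoidance maneuvers described above, encoded as
# (action, value) sequences; turn angles are in degrees and forward/backward
# distances in metres, as in the text.
MANEUVERS = {
    ("front",):                  [("right", 45), ("forward", 0.5),
                                  ("left", 90),  ("forward", 0.5),
                                  ("right", 45)],
    ("front", "left"):           [("right", 45), ("forward", 0.5),
                                  ("left", 90),  ("forward", 0.5),
                                  ("right", 45)],
    ("front", "right"):          [("left", 45),  ("forward", 0.5),
                                  ("right", 90), ("forward", 0.5),
                                  ("left", 45)],
    ("front", "left", "right"):  [("backward", 0.5), ("right", 90),
                                  ("forward", 0.7),  ("left", 90),
                                  ("forward", 1.0),  ("right", 45),
                                  ("forward", 0.8),  ("right", 45)],
}

def avoidance_plan(detected: set) -> list:
    """Return the maneuver for the detected obstacle sides, or an empty
    plan when only left/right obstacles are seen (they are ignored)."""
    key = tuple(s for s in ("front", "left", "right") if s in detected)
    return MANEUVERS.get(key, [])

print(avoidance_plan({"front", "right"}))   # left-hand avoidance sequence
print(avoidance_plan({"left"}))             # [] -- ignored
```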

3.3.2. Path Calculation

During obstacle avoidance, the PT covers an additional distance and stops before reaching the predefined destination. Therefore, the path is recalculated to enable accurate navigation. The proposed system uses various patterns and functions to recalculate the path.
  • Case 1:
If the obstacles are to the front; the front and left; or the front and right, then an isosceles right-angled triangle (∆ABC) is formed whose two sides, AB and BC, are known and whose third side (AC) is unknown, as shown in Figure 3a,b.
Here, we used Pythagoras’ theorem to measure the correct distance. According to this theorem (Equation (1)), the square of the hypotenuse is equal to the sum of the squares of the other two sides, i.e., the base and the perpendicular:
(Hypotenuse)² = (Base)² + (Perpendicular)²   (1)
(AC)² = (AB)² + (BC)²
In an isosceles right-angled triangle, both sides (AB and BC) have the same length (I), so:
(AC)² = (I)² + (I)² = 2I²
AC = √(2I²) = I√2   (2)
After calculating the straight distance (Equation (2)), as shown in Figure 3c,d, the difference (s) between 2I (base plus perpendicular) and AC (hypotenuse) is measured to recalculate the path using Equation (3):
s = 2I − AC   (3)
Finally, s is added to the traveled distance to attain the exact target.
  • Case 2:
If the obstructions are to the front, left, and right sides of the PT, then the obstacle avoidance mechanism (OAM) creates a right trapezoid (ABCD), as shown in Figure 4a, where AB is the long base with a missing part AY, BC and DA are the legs, and CD is the short base. To find AY, we create a rectangle (XBCD) and a right triangle (AXD), as shown in Figure 4b. In the rectangle XBCD, XB = CD = 1 m and BC = XD = 0.7 m, so XY = XB − YB. To find AX in the right triangle (AXD), we again used Pythagoras’ theorem, i.e., (Hypotenuse)² = (Base)² + (Perpendicular)². Thus, (DA)² = (AX)² + (XD)², or (AX)² = (DA)² − (XD)², so AX = √((DA)² − (XD)²). Now, AY = AX + XY, and YBCDA = YB + BC + CD + DA, as shown in Figure 4c. To recalculate the path, we find the difference (f) between YBCDA and AY, i.e., f = YBCDA − AY. Finally, f is added to the traveled distance to attain the exact target.
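As a worked illustration of both recalculation cases, the following sketch evaluates Equations (1)–(3) and the trapezoid relations using the maneuver distances given above; the value of YB is assumed to equal the 0.5 m backward move, which the text does not state explicitly.

```python
# Worked sketch of the two path-recalculation cases. Variable names follow
# the figures; this is an illustration, not the authors' code. YB = 0.5 m
# is an assumption (the backward move of the fourth maneuver).
import math

def case1_correction(leg: float = 0.5) -> float:
    """Isosceles right triangle: s = 2I - AC, with AC = I * sqrt(2)."""
    hypotenuse = leg * math.sqrt(2.0)         # AC
    return 2.0 * leg - hypotenuse             # s, added to the travelled distance

def case2_correction(yb: float = 0.5, bc: float = 0.7,
                     cd: float = 1.0, da: float = 0.8) -> float:
    """Right trapezoid: f = YBCDA - AY, with AX = sqrt(DA^2 - XD^2)."""
    xd = bc                                   # opposite sides of rectangle XBCD
    xb = cd
    ax = math.sqrt(da**2 - xd**2)
    ay = ax + (xb - yb)                       # AY = AX + XY
    return (yb + bc + cd + da) - ay           # f, added to the travelled distance

print(round(case1_correction(), 3))           # ~0.293 m
print(round(case2_correction(), 3))           # ~2.113 m
```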

3.4. Navigation

The main aspect of designing a mobile robot is the navigational ability. The objective of navigation is to move from the starting point to the destination in a familiar or unfamiliar environment using the sensor values to perform a particular task. Mobile robots rely on various factors, including perception, localization, cognition, and control to achieve a specific goal. It becomes vital to provide the robot with useful information regarding its position to enable safe navigation.
Since the suggested framework functions in indoor settings, GPS technology cannot be used for the PT’s accurate location estimation. Consequently, we leveraged perception-based positioning [49], which analyzes distance, angle, and velocity to determine the robot’s current position. The location and direction of a mobile robotic device rely on identifying the target points. Autonomous robots collect information from encoders, odometers, and infrared and ultrasonic sensors for position-based navigation. The robot’s location is confirmed by the sensor data, and its position is determined by matching location parameters to a specific value [50].
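A minimal sketch of such perception-based positioning follows, assuming a planar dead-reckoning update that combines the encoder-derived distance increment with the MPU6050 yaw angle; the paper does not give its exact localization computation.

```python
# Hedged sketch of perception-based position estimation: a planar
# dead-reckoning update. Names and the update rule are illustrative
# assumptions, not the authors' localization code.
import math

def update_pose(x: float, y: float, yaw_deg: float,
                distance_increment: float) -> tuple:
    """Advance (x, y) by the distance just travelled along heading yaw."""
    yaw = math.radians(yaw_deg)
    return (x + distance_increment * math.cos(yaw),
            y + distance_increment * math.sin(yaw))

# e.g., five 0.1 m steps along a 0-degree heading accumulate straight travel
x, y = 0.0, 0.0
for _ in range(5):
    x, y = update_pose(x, y, 0.0, 0.1)
print(round(x, 2), round(y, 2))   # 0.5 0.0
```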

3.5. Implementation

The proposed navigation technique acquires different values from the sensors (LM393, MPU6050, and HC-SR04), which are checked by the decision-making algorithm for navigation, as shown in Figure 5. The functions and keywords for the algorithm are shown in Table 1.
To perform remote monitoring of the patient, the system is initialized. After initialization, the PT waits for the user command. The operator issues the command using the Menu, which contains different labeled buttons: “Room 1” for target position 1 (TP1), “Room 2” for target position 2 (TP2), “Room 3” for target position 3 (TP3), “Room 4” for target position 4 (TP4), and “Stop” to halt the PT’s motion and switch the driving mode to manual. When input parameters are received by the PT, they are evaluated by the algorithm. If the value is TP1, TP2, TP3, or TP4, the PT navigates in autonomous mode; otherwise, it operates in manual mode. In autonomous mode, the PT begins to move forward towards the specific target based on the input value. During navigation, the ultrasonic sensors keep scanning for obstacles. If there are no obstructions, the PT moves straight towards the specific target position (TP). The accelerometer/gyroscope sensor (MPU6050) calculates the robot’s direction in the RE, and its values are utilized to align the PT on the straight path: if the rotation angle exceeds the predefined threshold (5°) in a particular direction, the PT is rotated in the opposite direction to re-align. If there are barriers, the system uses the OAM to evade the obstacles, as described in Section 3.3.1. On reaching the destination, the PT stops moving and waits 10 s to collect and transmit the health data. Then, it takes a turn (180°) and moves forward. The ultrasonic sensors continue scanning, and if obstructions are detected, the OAM is executed to ensure safe navigation. On reaching the starting point, the PT stops, takes a turn (180°), and comes to a halt.
The health service can take control of the PT at any instant. When the operator issues the “Stop” command, the control is switched to the manual driving mode and the DTs stop moving. In manual mode, the user navigates the PT using the laptop’s arrow keys. If there are no obstructions, the PT moves straight to its target. If obstacles are detected by the PT, they are visualized in the VE. The operator observes the PT in the VE and avoids the barriers using the arrow keys. The “Up” arrow key is used to move forward; the “Down” arrow key is used to move backward; the “Left” key is used to turn left; and the “Right” key is used to turn right.
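The following condensed sketch mirrors the autonomous branch of the decision flow in Figure 5 under simplifying assumptions: hardware access is abstracted behind callables so the logic runs stand-alone, and all names and the step size are illustrative rather than the authors’ code.

```python
# Simulation-style sketch of the decision loop: menu command, 5-degree
# heading correction, obstacle scan against the 0.4 m threshold, and the
# 10 s data-collection pause at the target. Hardware I/O is stubbed.
from typing import Callable, Dict

TARGETS_M = {"TP1": 19.0, "TP2": 15.62, "TP3": 12.24, "TP4": 8.86}
OBSTACLE_THRESHOLD_M = 0.4   # detection threshold from the text
HEADING_TOLERANCE_DEG = 5.0  # re-alignment threshold from the text
STEP_M = 0.1                 # simulated forward step per loop iteration

def navigate(command: str,
             read_sonars: Callable[[], Dict[str, float]],
             read_yaw_deg: Callable[[], float],
             log: Callable[[str], None] = print) -> str:
    """Drive toward the commanded target position; return the final mode."""
    if command not in TARGETS_M:              # "Stop" or unknown -> manual mode
        return "manual"
    travelled, goal = 0.0, TARGETS_M[command]
    while travelled < goal:
        blocked = {side for side, dist in read_sonars().items()
                   if dist < OBSTACLE_THRESHOLD_M}
        if "front" in blocked:
            # run the OAM for this pattern, then apply the path
            # recalculation correction from Section 3.3.2
            log(f"avoidance maneuver for sides: {sorted(blocked)}")
        yaw = read_yaw_deg()
        if abs(yaw) > HEADING_TOLERANCE_DEG:
            log(f"re-aligning by {-yaw:.1f} degrees")  # rotate opposite to drift
        travelled += STEP_M                   # one step along the straight path
    log("target reached: collect health data for 10 s, turn 180, return")
    return "autonomous"

# Dry run with obstacle-free, perfectly aligned sensor stubs
mode = navigate("TP4",
                read_sonars=lambda: {"left": 9.9, "right": 9.9, "front": 9.9},
                read_yaw_deg=lambda: 0.0)
print(mode)   # autonomous
```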

4. Experimental Setup and Performance Evaluation

The experimentation was carried out at the CS&IT department, University of Malakand. The aim was to move the PT to the desired destination autonomously by monitoring its VT in the VE and acquiring patient health information for transmission to the control station. The experimental environment is shown in Figure 6. Different target positions were specified to analyze the performance of the proposed system.
Target position 1 (TP1): The health monitoring setup was attached to the human subject inside Room 4 (19 m from the starting point).
Target position 2 (TP2): The monitoring unit was mounted on the human subject inside Room 3 (15.62 m from the starting point).
Target position 3 (TP3): The patient monitoring setup was attached to the human subject inside Room 2 (12.24 m from the starting point).
Target position 4 (TP4): The health monitoring setup was attached to the human subject inside Room 1 (8.86 m from the starting point).
The proposed system’s performance was assessed by measuring navigation accuracy and monitoring data quality (DQ). Navigation accuracy was measured using the error, i.e., the difference between the actual distance and the distance covered by the PT. The health data were evaluated using three well-known DQ dimensions (accuracy, completeness, and timeliness) [51]. The experimentation consisted of nine tasks (Tasks 1–9), classified into two categories. Category (Cat) 1 consisted of Tasks 1, 2, 3, and 4, classified according to the distance from the starting point. Category (Cat) 2 consisted of Tasks 5, 6, 7, 8, and 9, classified based on various obstacles at different positions. Each task was performed 10 times, resulting in 90 trials. The values for each task were averaged to obtain the final result.
  • Cat 1:
Task 1: The PT had to reach TP1 and return to the control center after receiving the health data from the medical sensors.
Task 2: The PT had to reach TP2 and return to the control center after receiving health data from the medical sensors.
Task 3: The PT had to reach TP3 and return to the control center after receiving health data from the medical sensors.
Task 4: The PT had to reach TP4 and return to the control center after receiving health data from the medical sensors.
  • Cat 2:
Task 5: The PT had to reach TP1, avoiding a static obstacle (front) placed at TP3, and return to the control center after receiving health data from the medical sensors.
Task 6: The PT had to reach TP1, avoiding two static obstacles (front, left) placed at TP3, and return to the control center after receiving health data from the medical sensors.
Task 7: The PT had to reach TP1, avoiding two static obstacles (front, right) placed at TP3, and return to the control center after receiving health data from the medical sensors.
Task 8: The PT had to reach TP1, avoiding three static obstacles (front, right, left) placed at TP3, and return to the control center after receiving health data from the medical sensors.
Task 9: The PT had to reach TP1, avoiding a moving obstacle (front) initiated from TP3, and return to the control center after receiving sensor data from the medical sensors.

4.1. Navigation Accuracy

The proposed system’s navigation accuracy is presented in Table 2 and Table 3. Da is the actual distance in meters (m) from the starting point to the target position and back to the control station, and MDc is the mean distance covered by the PT during the trials. The mean error (ME) and standard deviation (SD) of the 10 trials for each task in Cat 1 are presented in Table 4 and charted in Figure 7a, whereas the ME and SD of the tasks in Cat 2 are presented in Table 5 and charted in Figure 7b.
To measure the statistical difference between the groups, we employed analysis of variance (ANOVA) [52].
The ANOVA test results in Table 6 show a significant variation, F(3, 36) = 53.19, p < 0.001, among the mean errors of Cat 1.
However, the ANOVA test results in Table 7 show no significant variation, F(4, 45) = 0.22, p = 0.927, among the mean errors of Cat 2.
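For reference, the one-way ANOVA reported here can be reproduced with scipy.stats.f_oneway; the error values below are placeholders for the per-trial errors, not the measured data.

```python
# Hypothetical sketch of the one-way ANOVA applied to per-task error
# groups. The sample values are placeholders, not the paper's trials.
from scipy.stats import f_oneway

task1_err = [0.41, 0.39, 0.44, 0.40, 0.42]   # placeholder errors (m)
task2_err = [0.31, 0.30, 0.33, 0.29, 0.32]
task3_err = [0.22, 0.24, 0.21, 0.23, 0.22]
task4_err = [0.13, 0.14, 0.12, 0.13, 0.15]

f_stat, p_value = f_oneway(task1_err, task2_err, task3_err, task4_err)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```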

4.2. Monitoring Data Quality

The monitoring data include heart rate, oxygen level, and temperature measurements. The oxygen level and heartbeat sensors return integer parameters, whereas the temperature sensor measurements are gathered as float values in Celsius. When the PT reaches a specific target point, it connects with the PMU using the Bluetooth module to collect the health parameters. The PMU includes the following sensors: a heartbeat and oxygen sensor (MAX30100) and a temperature sensor (DS-18B20). It collects information using the sensors attached to the human body (a BS student with normal health status) inside the room. The DS-18B20 sensor is mounted on the wrist, while the MAX30100 sensor is attached to the subject’s forefinger. The sensors are connected to a microcontroller board (Arduino) that analyzes the collected information. A Bluetooth module is attached to the Arduino to transmit the analyzed data from the PMU to the PT, which is also equipped with a Bluetooth module. The PMU can be powered by a battery or mains electricity. When the PT arrives at the destination, a Bluetooth connection is established for data communication. After collecting sensor data from the PMU for 10 s, the PT sends them to the base station using the NRF24L01+ communication module. The collection and transmission of data by the PT occur in real time. The received parameters are then saved as an Excel file to determine the DQ dimensions. The DQ dimensions are measured with the monitoring setup installed at the maximum and minimum distances for data acquisition.
The accuracy and completeness of the monitoring data are computed using Equations (4) and (5), respectively, as described in [53]:
Accuracy = 1 − re/r   (4)
where re is the number of erroneous data records and r is the total number of acquired data records.
Completeness = 1 − rc/r   (5)
where rc is the number of null (incomplete) data records and r is the total number of records received.
The timeliness of the health data is calculated using Equation (6), described in [53]:
Timeliness = ro/r   (6)
where ro is the number of data records acquired in a specified time interval and r is the total number of records in the same time slot.
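A minimal sketch of computing the three DQ dimensions (Equations (4)–(6)) over a batch of received records follows; the record format and plausibility test are assumed for illustration.

```python
# Illustrative computation of the DQ dimensions over received records.
# The heart-rate plausibility range and on-time test are assumptions.
def dq_dimensions(records, is_erroneous, arrived_on_time):
    r = len(records)
    r_e = sum(1 for rec in records if is_erroneous(rec))     # erroneous
    r_c = sum(1 for rec in records if rec is None)           # null records
    r_o = sum(1 for rec in records if arrived_on_time(rec))  # on time
    return {"accuracy": 1 - r_e / r,
            "completeness": 1 - r_c / r,
            "timeliness": r_o / r}

# e.g., 10 heart-rate samples: one implausible, one missing, nine on time
records = [72, 71, None, 70, 250, 73, 72, 71, 70, 69]
out = dq_dimensions(
    records,
    is_erroneous=lambda rec: rec is not None and not 30 <= rec <= 220,
    arrived_on_time=lambda rec: rec is not None)
print(out)   # {'accuracy': 0.9, 'completeness': 0.9, 'timeliness': 0.9}
```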
The results of the DQ dimensions for Tasks 1 and 4 are shown in Table 8.

4.3. Comparative Analysis

The comparative analysis of the existing and proposed systems in navigation is shown in Table 9. The accuracy of the proposed system is calculated by taking the mean of the accuracies of Tasks 1–9.
The comparative analysis of the DQ dimensions for the existing and proposed systems is shown in Table 10. The proposed system’s parameters are obtained by averaging the DQ dimensions of Tasks 1 and 4 of Table 8. The comparison of VR and DTs-based robotic systems is given in Table 11.

5. Discussion

The proposed approach creates a real-time RPM framework by implementing autonomous control of the PT in the DTs-based IoRT. The VE enables the operator to monitor the PT navigating autonomously towards the patient’s room for data collection. The designed algorithm can efficiently navigate the PT to the desired target location and back to the control center. The created OAM can detect and avoid obstacles in real time if they are present within the defined threshold distance, providing situational awareness to the operator by visualizing the obstacles, distance values, and distance labels (Left, Right, and Front) in the VE. It uses various geometrical patterns to recalculate the path and remove the distance errors. We evaluated the performance of the OAM using obstructions of predefined size (width = 20 cm, height = 35 cm). The barrier’s height has no impact on the system’s performance; however, the width can affect the navigation accuracy. If the width of the barriers is less than 20 cm, the PT can avoid them using the same functions. However, if the width exceeds 20 cm, the system must be re-programmed because it lacks dynamicity. Acquiring data from the monitoring unit for 10 s provides enough values to monitor the patient’s health. The navigation accuracy validates the PT’s efficiency. In Cat 1, Task 4 achieved the highest accuracy (98.48 percent), while Task 1 achieved the lowest (97.92 percent). Although the difference is small, the results show that short-distance tasks are more accurate than long-distance tasks. In Cat 2, Task 8 has relatively lower accuracy, which means that increasing the number of barriers slightly decreases the navigation accuracy. The ANOVA results indicate significant variation within the first experimental group (Cat 1) but no significant variation within the second experimental group (Cat 2). In the evaluation of the DQ dimensions, the accuracy and completeness values are almost the same for both the long- and short-distance tasks (Tasks 1 and 4). However, the timeliness of the data increases as the distance increases.
The comparative analysis shows a clear advantage of the proposed system over the existing frameworks. In robot navigation, most systems lack evaluation protocols and navigation accuracy figures, except the one developed in [33]. In RPM, only the framework proposed in [37] provides the accuracy of its monitoring data; however, it lacks the remaining two dimensions (completeness and timeliness). The DTs- and VR-based systems do not include a mechanism for connecting with external sensors or devices.
Unlike the existing systems, the proposed system provides a detailed evaluation protocol to calculate navigation accuracy and presents the monitoring DQ dimensions. Also, unlike the existing DTs- and VR-based systems, the proposed framework can connect to external sensors and devices. Furthermore, our system is the only one capable of monitoring both the autonomous robotic device and remote patients.
The remaining part highlights the limitations of the proposed system to provide research directions for new researchers. The proposed system uses a pre-built VE to visualize the PT’s virtual replica as well as its surroundings. However, if things change in real time, e.g., objects are added, removed, or displaced, it is very tricky to keep the virtual space regularly updated according to the real scenario. The OAM’s performance has been examined for fixed-size moving objects; however, no human subjects have been involved in experiments to assess the system’s performance in detecting and avoiding humans. Furthermore, the system does not include any mechanism to reflect the real scenario of patients.

6. Conclusions and Future Work

This research presented a real-time RPM system by implementing autonomous control in the DTs-based IoRT. The pre-built virtual space presents the virtual replica of the PT and the RE. A Menu with several buttons is used to direct the PT to a specific location (patient room). A decision-making algorithm is developed that analyzes sensor data to enable safe navigation of the PT in the RE. The PT autonomously navigates to the patient room, collects and transmits health-related information, and returns to the health service. A real-time obstacle detection and avoidance mechanism is proposed that uses different geometrical patterns and mathematical formulae to evade obstructions and recalculate the path. The system allows the user to switch between the autonomous and manual driving modes. The experimental results and comparative analysis verify the proposed system’s advantage over the existing frameworks. The suggested system has an overall navigation accuracy of 97.81%, making it more effective than existing systems. In the context of DQ, the existing systems lack two dimensions: timeliness and completeness. In data accuracy, our system has a clear advantage over the available frameworks, showing an accuracy of 98.2%. Among VR-based and DTs-based interfacing and supervision systems, the proposed system has a clear advantage over the other schemes, as it includes autonomous control, which is missing in the majority of systems. Also, the proposed system provides connectivity with external sensors and a detailed comparison, which the existing frameworks do not offer. In the future, the system will be upgraded to enable the PT to arrive at the patient room autonomously after a specific time interval rather than being directed via the Menu. Machine learning techniques will be employed to predict the patient’s status and issue early warnings when anomalies in the patient data are detected. Furthermore, the OAM will be improved to measure the obstacle’s size in real time to enable dynamicity, as it currently avoids barriers with predefined dimensions.

Author Contributions

Conceptualization, S.K. and S.U.; methodology, S.U.; formal analysis, K.U.; writing—review and editing, S.A. (Sulaiman Almutairi); validation, K.U.; software, S.K.; resources, S.U.; data curation, S.K.; writing—original draft preparation, S.K.; investigation, S.K.; writing—review and editing, S.A. (Sulaiman Aftan); supervision, S.U. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group no. KSRG-2023-560.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The authors declare no concerns on data sharing.

Acknowledgments

The authors extend their appreciation to the Deanship of Research and Graduate Studies at King Khalid University for funding this work through a Large Research Project under grant number RGP 2/566/44.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Smith, G.B.; Recio-Saucedo, A.; Griffiths, P. The measurement frequency and completeness of vital signs in general hospital wards: An evidence free zone? Int. J. Nurs. Stud. 2017, 74, A1–A4. [Google Scholar] [CrossRef]
  2. Shaik, T.; Tao, X.; Higgins, N.; Li, L.; Gururajan, R.; Zhou, X.; Acharya, U.R. Remote patient monitoring using artificial intelligence: Current state, applications, and challenges. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2023, 13, e1485. [Google Scholar] [CrossRef]
  3. Leila, E.; Othman, S.B.; Sakli, H. An Internet of Robotic Things System for combating coronavirus disease pandemic (COVID-19). In Proceedings of the 2020 20th International Conference on Sciences and Techniques of Automatic Control and Computer Engineering (STA), Sfax, Tunisia, 20–22 December 2020; pp. 333–337. [Google Scholar]
  4. Wang, H.; Huang, D.; Huang, H.; Zhang, J.; Guo, L.; Liu, Y.; Ma, H.; Geng, Q. The psychological impact of COVID-19 pandemic on medical staff in Guangdong, China: A cross-sectional study. Psychol. Med. 2022, 52, 884–892. [Google Scholar] [CrossRef] [PubMed]
  5. Miranda, R.; Oliveira, M.D.; Nicola, P.; Baptista, F.M.; Albuquerque, I. Towards a framework for implementing remote patient monitoring from an integrated care perspective: A scoping review. Int. J. Health Policy Manag. 2023, 12, 7299. [Google Scholar] [CrossRef] [PubMed]
  6. Foster, C.; Schinasi, D.; Kan, K.; Macy, M.; Wheeler, D.; Curfman, A. Remote Monitoring of Patient-and Family-Generated Health Data in Pediatrics. Pediatrics 2022, 149, 54137. [Google Scholar] [CrossRef]
  7. Hayes, C.J.; Dawson, L.; McCoy, H.; Hernandez, M.; Andersen, J.; Ali, M.M.; Bogulski, C.A.; Eswaran, H. Utilization of remote patient monitoring within the United States health care system: A scoping review. Telemed. E-Health 2023, 29, 384–394. [Google Scholar] [CrossRef]
  8. Khan, M.A.; Din, I.U.; Kim, B.-S.; Almogren, A. Visualization of Remote Patient Monitoring System Based on Internet of Medical Things. Sustainability 2023, 15, 8120. [Google Scholar] [CrossRef]
  9. Farias, F.A.C.d.; Dagostini, C.M.; Bicca, Y.d.A.; Falavigna, V.F.; Falavigna, A. Remote patient monitoring: A systematic review. Telemed. E-Health 2020, 26, 576–583. [Google Scholar] [CrossRef]
  10. Hidefjäll, P.; Laurell, H.; Johansson, J.; Barlow, J. Institutional logics and the adoption and implementation of remote patient monitoring. Innovation 2023, 2162907. [Google Scholar] [CrossRef]
  11. Pradhan, B.; Bharti, D.; Chakravarty, S.; Ray, S.S.; Voinova, V.V.; Bonartsev, A.P.; Pal, K. Internet of things and robotics in transforming current-day healthcare services. J. Healthc. Eng. 2021, 2021, 9999504. [Google Scholar] [CrossRef]
  12. Khan, S.; Ullah, S.; Khan, H.U.; Rehman, I.U. Digital-Twins-Based Internet of Robotic Things for Remote Health Monitoring of COVID-19 Patients. IEEE Internet Things J. 2023, 10, 16087–16098. [Google Scholar] [CrossRef]
  13. Vermesan, O.; Bahr, R.; Ottella, M.; Serrano, M.; Karlsen, T.; Wahlstrøm, T.; Sand, H.E.; Ashwathnarayan, M.; Gamba, M.T. Internet of robotic things intelligent connectivity and platforms. Front. Robot. AI. 2020, 7, 104. [Google Scholar] [CrossRef]
  14. Haag, S.; Anderl, R. Digital twin–Proof of concept. Manuf. Lett. 2018, 15, 64–66. [Google Scholar] [CrossRef]
  15. da Silva Mendonça, R.; de Oliveira Lins, S.; de Bessa, I.V.; de Carvalho Ayres, F.A., Jr.; de Medeiros, R.L.P.; de Lucena, V.F., Jr. Digital twin applications: A survey of recent advances and challenges. Processes 2022, 10, 744. [Google Scholar] [CrossRef]
  16. Semeraro, C.; Olabi, A.; Aljaghoub, H.; Alami, A.H.; Al Radi, M.; Dassisti, M.; Abdelkareem, M.A. Digital twin application in energy storage: Trends and challenges. J. Energy Storage 2023, 58, 106347. [Google Scholar] [CrossRef]
  17. Bondarenko, O.; Fukuda, T. Development of a diesel engine’s digital twin for predicting propulsion system dynamics. Energy 2020, 196, 117126. [Google Scholar] [CrossRef]
  18. Mamun, K.; Sharma, A.; Hoque, A.; Szecsi, T. Remote patient physical condition monitoring service module for iWARD hospital robots. In Proceedings of the Asia-Pacific World Congress on Computer Science and Engineering, Nadi, Fiji, 4–5 November 2014; pp. 1–8. [Google Scholar]
  19. Shwetha, R.; Kirubanand, V. Remote monitoring of heart patients using robotic process automation (RPA). In Proceedings of the ITM Web of Conferences, Online, 27–29 January 2021; p. 01002. [Google Scholar]
  20. Arabi, Y.M.; Murthy, S.; Webb, S. COVID-19: A novel coronavirus and a novel challenge for critical care. Intensive Care Med. 2020, 46, 833–836. [Google Scholar] [CrossRef]
  21. Ruan, K.; Wu, Z.; Xu, Q. Smart cleaner: A new autonomous indoor disinfection robot for combating the COVID-19 pandemic. Robotics 2021, 10, 87. [Google Scholar] [CrossRef]
  22. Mohammadi, A.; Kucharski, A.; Rawashdeh, N. UVC and far-UVC light disinfection ground robot design for sterilizing the Coronavirus on vertical surfaces. In Proceedings of the Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, Orlando, FL, USA, 3–7 April 2022; pp. 57–63. [Google Scholar]
  23. Plunk, A.; Smith, J.; Strickland, D.; Erkal, C.; Sargolzaei, S. AutonoMop, Automating Mundane Work in the COVID-19 Era. In Proceedings of the SoutheastCon 2022, Mobile, AL, USA, 26 March–3 April 2022; pp. 626–630. [Google Scholar]
  24. Fanti, M.P.; Mangini, A.M.; Roccotelli, M.; Silvestri, B. Hospital drugs distribution with autonomous robot vehicles. In Proceedings of the 2020 IEEE 16th International Conference on Automation Science and Engineering (CASE), Hong Kong, China, 20–21 August 2020; pp. 1025–1030. [Google Scholar]
  25. Amin, R.; Islam, S.H.; Biswas, G.; Khan, M.K.; Kumar, N. A robust and anonymous patient monitoring system using wireless medical sensor networks. Future Gener. Comput. Syst. 2018, 80, 483–495. [Google Scholar] [CrossRef]
  26. Bajeh, A.O.; Mojeed, H.A.; Ameen, A.O.; Abikoye, O.C.; Salihu, S.A.; Abdulraheem, M.; Oladipo, I.D.; Awotunde, J.B. Internet of robotic things: Its domain, methodologies, and applications. In Emergence of Cyber Physical System and IoT in Smart Automation and Robotics; Springer: Berlin/Heidelberg, Germany, 2021; pp. 135–146. [Google Scholar]
  27. Khan, Z.H.; Siddique, A.; Lee, C.W. Robotics utilization for healthcare digitization in global COVID-19 management. Int. J. Environ. Res. Public Health 2020, 17, 3819. [Google Scholar] [CrossRef]
  28. Krell, E.; Sheta, A.; Balasubramanian, A.P.R.; King, S.A. Collision-free autonomous robot navigation in unknown environments utilizing PSO for path planning. J. Artif. Intell. Soft. Comput. Res. 2019, 9, 267–282. [Google Scholar] [CrossRef]
  29. Hopko, S.K.; Mehta, R.K.; Pagilla, P.R. Physiological and perceptual consequences of trust in collaborative robots: An empirical investigation of human and robot factors. Appl. Ergon. 2023, 106, 103863. [Google Scholar] [CrossRef] [PubMed]
  30. Parsons, H.M.; Kearsley, G.P. Human Factors and Robotics: Current Status and Future. Science 1980, 208, 1327–1335. [Google Scholar]
  31. Khan, H.R.; Haura, I.; Uddin, R. RoboDoc: Smart Robot Design Dealing with Contagious Patients for Essential Vitals Amid COVID-19 Pandemic. Sustainability 2023, 15, 1647. [Google Scholar] [CrossRef]
  32. Mamun, K.A.; Sharma, A.; Islam, F.; Hoque, A.; Szecsi, T. Patient Condition Monitoring Modular Hospital Robot. J. Softw. 2016, 11, 768–786. [Google Scholar] [CrossRef]
  33. Mišeikis, J.; Caroni, P.; Duchamp, P.; Gasser, A.; Marko, R.; Mišeikienė, N.; Zwilling, F.; De Castelbajac, C.; Eicher, L.; Früh, M. Lio-a personal robot assistant for human-robot interaction and care applications. IEEE Robot. Autom. Lett. 2020, 5, 5339–5346. [Google Scholar] [CrossRef]
  34. Cantone, A.A.; Esposito, M.; Perillo, F.P.; Romano, M.; Sebillo, M.; Vitiello, G. Enhancing Elderly Health Monitoring: Achieving Autonomous and Secure Living through the Integration of Artificial Intelligence, Autonomous Robots, and Sensors. Electronics 2023, 12, 3918. [Google Scholar] [CrossRef]
  35. Rai, A.; Kundu, K.; Dev, R.; Keshari, J.P.; Gupta, D. Design and development Virtual Doctor Robot for contactless monitoring of patients during COVID-19. Int. J. Exp. Res. Rev. 2023, 31, 42–50. [Google Scholar] [CrossRef]
  36. Mireles, C.; Sanchez, M.; Cruz-Ortiz, D.; Salgado, I.; Chairez, I. Home-care nursing controlled mobile robot with vital signal monitoring. Med. Biol. Eng. Comput. 2023, 61, 399–420. [Google Scholar] [CrossRef]
  37. Antony, M.; Parameswaran, M.; Mathew, N.; Sajithkumar, V.; Joseph, J.; Jacob, C.M. Design and implementation of automatic guided vehicle for hospital application. In Proceedings of the 2020 5th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 10–12 June 2020; pp. 1031–1036. [Google Scholar]
  38. Topini, A.; Sansom, W.; Secciani, N.; Bartalucci, L.; Ridolfi, A.; Allotta, B. Variable admittance control of a hand exoskeleton for virtual reality-based rehabilitation tasks. Front. Neurorobot. 2022, 15, 789743. [Google Scholar] [CrossRef]
  39. Kalinov, I.; Trinitatova, D.; Tsetserukou, D. Warevr: Virtual reality interface for supervision of autonomous robotic system aimed at warehouse stocktaking. In Proceedings of the 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Melbourne, Australia, 17–20 October 2021; pp. 2139–2145. [Google Scholar]
  40. Ponomareva, P.; Trinitatova, D.; Fedoseev, A.; Kalinov, I.; Tsetserukou, D. Grasplook: A vr-based telemanipulation system with r-cnn-driven augmentation of virtual environment. In Proceedings of the 2021 20th International Conference on Advanced Robotics (ICAR), Ljubljana, Slovenia, 6–10 December 2021; pp. 166–171. [Google Scholar]
  41. Solanes, J.E.; Muñoz, A.; Gracia, L.; Tornero, J. Virtual reality-based interface for advanced assisted mobile robot teleoperation. Appl. Sci. 2022, 12, 6071. [Google Scholar] [CrossRef]
  42. Garg, G.; Kuts, V.; Anbarjafari, G. Digital twin for FANUC robots: Industrial robot programming and simulation using virtual reality. Sustainability 2021, 13, 10336. [Google Scholar] [CrossRef]
  43. Tähemaa, T.; Bondarenko, Y. Digital twin based synchronised control and simulation of the industrial robotic cell using virtual reality. J. Mach. Eng. 2019, 19, 128–144. [Google Scholar]
  44. Laaki, H.; Miche, Y.; Tammi, K. Prototyping a digital twin for real time remote control over mobile networks: Application of remote surgery. IEEE Access 2019, 7, 20325–20336. [Google Scholar] [CrossRef]
  45. Rubio, F.; Valero, F.; Llopis-Albert, C. A review of mobile robots: Concepts, methods, theoretical framework, and applications. Int. J. Adv. Robot. Syst. 2019, 16, 1729881419839596. [Google Scholar] [CrossRef]
  46. Siegwart, R.; Nourbakhsh, I.R. Introduction to Autonomous Mobile Robots; A Bradford Book; The MIT Press: Cambridge, MA, USA; London, UK, 2004; ISBN 0-262-19502-X. [Google Scholar]
  47. Medina Sánchez, C.; Zella, M.; Capitán, J.; Marrón, P.J. From perception to navigation in environments with persons: An indoor evaluation of the state of the art. Sensors 2022, 22, 1191. [Google Scholar] [CrossRef] [PubMed]
  48. Fusic, S.; Sugumari, T. A review of perception-based navigation system for autonomous mobile robots. Recent Pat. Eng. 2023, 17, 13–22. [Google Scholar] [CrossRef]
  49. Nakhaeinia, D.; Tang, S.H.; Noor, S.M.; Motlagh, O. A review of control architectures for autonomous navigation of mobile robots. Int. J. Phys. Sci. 2011, 6, 169–174. [Google Scholar]
  50. Chen, H.; Hailey, D.; Wang, N.; Yu, P. A review of data quality assessment methods for public health information systems. Int. J. Environ. Res. Public Health 2014, 11, 5170–5207. [Google Scholar] [CrossRef]
  51. Connelly, L.M. Introduction to analysis of variance (ANOVA). Medsurg. Nurs. 2021, 30, 158–218. [Google Scholar]
  52. Lee, Y.W.; Pipino, L.L.; Funk, J.D.; Wang, R.Y. Journey to Data Quality; The MIT Press: Cambridge, MA, USA; London, UK, 2009; p. 240. [Google Scholar]
  53. Liaw, S.-T.; Rahimi, A.; Ray, P.; Taggart, J.; Dennis, S.; de Lusignan, S.; Jalaludin, B.; Yeo, A.; Talaei-Khoei, A. Towards an ontology for data quality in integrated chronic disease management: A realist review of the literature. Int. J. Med. Inform. 2013, 82, 10–24. [Google Scholar] [CrossRef] [PubMed]
Figure 1. (a) VT in the VE. (b) PT in the RE.
Figure 2. Graphical abstract of the proposed system.
Figure 3. (a) ∆ABC for avoiding front, and front and right obstacles. (b) ∆ABC for avoiding front and left obstacles. (c) Calculated straight distance (AC) after avoiding front, and front and right obstacles. (d) Calculated straight distance (AC) after avoiding front and left obstacles.
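The detour geometry in Figure 3 reduces to elementary triangle relations. As an illustration (the leg lengths and angles here are assumptions; the paper's exact construction is given in the figure), if the PT leaves its planned line at A, travels legs AB and BC around the obstacle, and rejoins the line at C, the straight distance it bypassed follows from the law of cosines:

\[
AC = \sqrt{AB^2 + BC^2 - 2\,AB\,BC\,\cos(\angle ABC)},
\]

and with symmetric 45° deviations (fLeft() and fRight() in Table 1), ∠A = ∠C = 45°, so ∠ABC = 90° and, for AB = BC, AC = √2 · AB.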
Figure 4. (a) Right trapezoid ABCD. (b) Right trapezoid with rectangle XBCD, and ∆AXD. (c) AY.
Figure 5. Flow chart for the decision-making algorithm.
Figure 6. Experimental scenario of the proposed system.
Figure 7. (a) ME and SD values of Cat 1; (b) ME and SD values of Cat 2.
Table 1. Functions and keywords for the algorithm.
S. No | Functions/Keywords | Description
1 | Forward() | Moves forward
2 | Backward() | Moves backward
3 | Right() | Turns right
4 | Left() | Turns left
5 | nLeft() | Turns left 90°
6 | fLeft() | Turns left 45°
7 | fRight() | Turns right 45°
8 | nRight() | Turns right 90°
9 | eTurn() | Turns right 180°
10 | mForward() | Moves forward 0.5 m
11 | sForward() | Moves forward 0.7 m
12 | oForward() | Moves forward 1 m
13 | eForward() | Moves forward 0.8 m
14 | mReverse() | Moves backward 0.5 m
15 | Stop() | Comes to a halt
16 | Wait() | Waits for 10 s
17 | TP1 | Target position 1 (19 m)
18 | TP2 | Target position 2 (15.62 m)
19 | TP3 | Target position 3 (12.24 m)
20 | TP4 | Target position 4 (8.86 m)
21 | Dist | Distance
22 | oLeft | Left obstacle at a distance of less than 0.4 m
23 | oRight | Right obstacle at a distance of less than 0.4 m
24 | oFront | Front obstacle at a distance of less than 0.4 m
25 | Uk | Up key
26 | Dk | Down key
27 | Rk | Right key
28 | Lk | Left key
29 | aMode | Autonomous mode
30 | mMode | Manual mode
31 | UC | User command
32 | oAvoid | Obstacle avoidance
33 | SP | Starting point
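Table 1's primitives compose into the decision-making loop of Figure 5. The following minimal sketch shows one way they could be combined in the autonomous mode; the sensor stubs, the 0.4 m trigger distance, the step size, and the odometry update are illustrative assumptions rather than the authors' firmware:

```python
# Minimal sketch of the autonomous navigation loop built from the
# Table 1 primitives. Ultrasonic reads, the 0.4 m trigger distance,
# and the odometry update are assumptions for illustration.

OBSTACLE_M = 0.4                       # assumed oFront/oLeft/oRight trigger distance
TARGETS = {"TP1": 19.0, "TP2": 15.62, "TP3": 12.24, "TP4": 8.86}
STEP_M = 0.1                           # distance advanced per loop iteration

def read_front(): return 2.0           # stub: front ultrasonic distance (m)
def read_right(): return 2.0           # stub: right ultrasonic distance (m)
def Forward(d): pass                   # stub motion primitives (Table 1)
def fLeft(): pass                      # turn left 45°
def fRight(): pass                     # turn right 45°
def eTurn(): pass                      # turn right 180°
def Stop(): pass
def Wait(): pass                       # pause 10 s to collect vitals

def navigate_to(tp: str) -> None:
    dist = 0.0                         # Dist: distance covered so far
    while dist < TARGETS[tp]:
        if read_front() < OBSTACLE_M:  # oFront: deviate 45° around the obstacle
            fRight() if read_right() >= OBSTACLE_M else fLeft()
        Forward(STEP_M)
        dist += STEP_M                 # simplified odometry update
    Stop()
    Wait()                             # transmit patient data at the bedside
    eTurn()                            # begin the return leg to SP

navigate_to("TP3")
```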
Table 2. Navigation accuracy obtained by averaging 10 trial values of each task in Cat 1.
Tasks | Target Positions | Obstacles | Da (m) | MDc (m) | Error | Accuracy
Task 1 | TP1 | — | 38 | 37.21 | 2.08% | 97.92%
Task 2 | TP2 | — | 31.24 | 30.69 | 1.76% | 98.24%
Task 3 | TP3 | — | 24.48 | 24.08 | 1.63% | 98.37%
Task 4 | TP4 | — | 17.72 | 17.45 | 1.52% | 98.48%
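The Error and Accuracy columns in Tables 2 and 3 follow from the relative shortfall between the actual round-trip distance (Da) and the mean distance covered (MDc), a relation consistent with every tabulated row:

\[
\text{Error} = \frac{D_a - MD_c}{D_a} \times 100\%, \qquad \text{Accuracy} = 100\% - \text{Error}.
\]

For Task 1, for example, (38 − 37.21)/38 × 100% ≈ 2.08%, giving an accuracy of 97.92%.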
Table 3. Navigation accuracy obtained by averaging 10 trial values of each task in Cat 2.
Tasks | Target Positions | Obstacles | Obstacles Status | Obstacles Positions | Da (m) | MDc (m) | Error | Accuracy
Task 5 | TP1 | 1 | Static | TP3 (12.24 m) Front | 38 | 37.05 | 2.50% | 97.50%
Task 6 | TP1 | 2 | Static | TP3 Front, Left | 38 | 37.05 | 2.50% | 97.50%
Task 7 | TP1 | 2 | Static | TP3 Front, Right | 38 | 37.04 | 2.53% | 97.47%
Task 8 | TP1 | 3 | Static | TP3 Front, Right, Left | 38 | 37.00 | 2.64% | 97.37%
Task 9 | TP1 | 1 | Moving | TP3 Front | 38 | 37.04 | 2.53% | 97.47%
Table 4. ME and SD of 10 trial values for each task in Cat 1.
Tasks | ME (cm) | SD
Task 1 | 79 | 13.93
Task 2 | 55 | 7.31
Task 3 | 40 | 7.71
Task 4 | 27 | 8.18
Table 5. ME and SD of 10 trial values for each task in Cat 2.
Tasks | ME (cm) | SD
Task 5 | 95 | 14.18
Task 6 | 95.5 | 14.2
Task 7 | 95.9 | 14.19
Task 8 | 100.3 | 16.77
Task 9 | 95.9 | 14.14
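The ME and SD entries in Tables 4 and 5 are the mean and standard deviation of the ten per-trial stopping errors of each task. A minimal sketch of the computation follows; the trial values are hypothetical, since the paper reports only the aggregated ME and SD:

```python
# Mean error (ME) and standard deviation (SD) over ten trials of one task.
# The per-trial stopping errors (cm) below are hypothetical placeholders.
from statistics import mean, stdev

trial_errors_cm = [70, 95, 80, 65, 85, 75, 90, 60, 88, 82]

me = mean(trial_errors_cm)   # mean stopping error in cm
sd = stdev(trial_errors_cm)  # sample standard deviation over the trials
print(f"ME = {me:.1f} cm, SD = {sd:.2f}")
```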
Table 6. ANOVA test result for Cat 1.
Errors: F = 53.19, df = 3, p-value = 0.012
Table 7. ANOVA test result for Cat 2.
Errors: F = 0.22, df = 4, p-value = 0.985
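The ANOVA results in Tables 6 and 7 correspond to a one-way test over the per-task trial errors; with four tasks in Cat 1 the between-groups df is 3, and with five tasks in Cat 2 it is 4. A sketch using SciPy, with hypothetical trial values:

```python
# One-way ANOVA across tasks, mirroring Tables 6 and 7.
# The per-trial error lists (cm) are hypothetical placeholders.
from scipy.stats import f_oneway

task1 = [70, 95, 80, 65, 85, 75, 90, 60, 88, 82]
task2 = [50, 60, 55, 48, 62, 53, 58, 47, 61, 56]
task3 = [38, 45, 40, 35, 44, 39, 42, 33, 46, 38]
task4 = [25, 30, 27, 22, 35, 26, 29, 20, 33, 23]

result = f_oneway(task1, task2, task3, task4)
print(f"F = {result.statistic:.2f}, p = {result.pvalue:.3f}")  # df between groups = 4 - 1 = 3
```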
Table 8. DQ dimensions of experimental Tasks 1 and 4 based on temperature, heartbeat and oxygen saturation sensors.
Tasks | Sensors | Accuracy | Completeness | Timeliness
Task 1 | Heartbeat | 0.977 | 0.984 | 0.967
Task 1 | Oxygen | 0.986 | 0.981 | 0.967
Task 1 | Temperature | 0.981 | 0.981 | 0.967
Task 4 | Heartbeat | 0.978 | 0.985 | 0.912
Task 4 | Oxygen | 0.987 | 0.986 | 0.912
Task 4 | Temperature | 0.984 | 0.982 | 0.912
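The DQ dimensions in Table 8 admit simple ratio definitions in the data-quality literature the paper draws on [50,52]. The sketch below uses those common forms, which may differ in detail from the authors' exact operationalization; the counts are hypothetical:

```python
# Common ratio-style data-quality (DQ) metrics; all counts are hypothetical.

def dq_accuracy(erroneous: int, total: int) -> float:
    """Fraction of sensor readings within the accepted error bound."""
    return 1 - erroneous / total

def dq_completeness(missing: int, expected: int) -> float:
    """Fraction of expected readings actually received."""
    return 1 - missing / expected

def dq_timeliness(currency_s: float, volatility_s: float) -> float:
    """One minus the data's age relative to its useful lifetime."""
    return max(0.0, 1 - currency_s / volatility_s)

print(dq_accuracy(23, 1000))      # 0.977, cf. Task 1 heartbeat accuracy
print(dq_completeness(16, 1000))  # 0.984, cf. Task 1 heartbeat completeness
print(dq_timeliness(3.3, 100.0))  # 0.967, cf. Task 1 timeliness
```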
Table 9. Comparative analysis of robot navigation systems.
Systems | Technologies | Connectivity | Interface | Mode | Services | Evaluation Protocol | Navigation Accuracy
[31] | Robotics | Wi-Fi, Bluetooth | Desktop-based GUI | Manual | RPM | — | —
[32] | Robotics | Bluetooth, Wi-Fi | Web-based | Autonomous | RPM, detecting patients lying on the floor | — | —
[33] | Robotics | Wi-Fi | Web-based | Autonomous | Detecting infected patients, surface disinfection | — | 85.5%
[34] | IoRT | Wi-Fi, IEEE 802.22, Ethernet | Telegram bot interface | Autonomous | Elderly people monitoring | — | —
[36] | Robotics, IoT | Wi-Fi | Mobile application | Autonomous | RPM | — | —
[37] | Robotics | ZigBee | GUI | Autonomous | RPM, gait cycle assistance | — | —
[38] | Robotics | Bluetooth, Internet | Android application | Autonomous, Manual | RPM, medicine delivery, waste collection | — | —
Proposed System | DTs-based IoRT | Bluetooth, NRF24L01+ | Desktop-based VR | Autonomous, Manual | RPM | — | 97.81%
Table 10. Comparative analysis of DQ dimensions calculated using temperature, heartbeat and oxygen sensors data.
Systems | Accuracy | Completeness | Timeliness
[31] | — | — | —
[32] | — | — | —
[33] | — | — | —
[34] | — | — | —
[36] | — | — | —
[37] | 0.970 | — | —
[38] | — | — | —
Proposed System | 0.982 | 0.983 | 0.940
Table 11. Comparative analysis of VR and DTs-based robotic systems.
Papers | Systems | Technologies | Description | Applications | Autonomous Operation | External Sensors Connectivity
[38] | Robotic hand exoskeleton | VR, DT | Observing virtual objects | Rehabilitation | — | —
[39] | WareVR | VR, DT | Monitoring an autonomous robot | Transporting stock in a warehouse | — | —
[40] | Robotic arm | VR, DT | Teleoperation | Conducting laboratory tests | — | —
[41] | VR system | VR | Controlling a mobile robot | Task inspection | — | —
[42] | DTs-based robotic system | VR, DT | Controlling a FANUC robot | Industrial processes | — | —
[43] | DTs-based robotic system | VR, DT | Controlling industrial manufacturing robots | Industrial processes | — | —
[44] | Robotic arm | VR, DT | Performing remote surgery | Medical purposes | — | —
Proposed System | DTs-based IoRT | DTs, IoRT, VR | Controlling and monitoring an autonomous robot | RPM | — | —