Communication

The Design of a Low-Cost Sensing and Control Architecture for a Search and Rescue Assistant Robot

School of Mechanical Automotive Engineering, Kyungil University, Gyeongsan 38428, Republic of Korea
* Author to whom correspondence should be addressed.
Machines 2023, 11(3), 329; https://doi.org/10.3390/machines11030329
Submission received: 29 January 2023 / Revised: 16 February 2023 / Accepted: 23 February 2023 / Published: 26 February 2023
(This article belongs to the Section Automation and Control Systems)

Abstract
At a disaster site, unforeseen circumstances can severely limit the activities of rescue workers. The best solution is for a cooperative team of robots and rescue workers to complete the rescue work together. In this paper, we therefore propose a simple, low-cost sensing and control architecture for a search and rescue assistant robot based on a thermal infrared (IR) sensor array, an ultrasonic sensor, and a three-axis accelerometer. In the proposed architecture, the locations of human survivors are estimated using the low-cost thermal IR sensor array, and a trajectory for approaching the detected survivors is generated and controlled. Obstacle avoidance is enabled through 3D position estimation of obstacles obtained by integrating 1D ultrasonic sensors. In addition, the three-axis accelerometer is used to estimate the tilt angle of the robot according to terrain conditions, and the angle of the storage box is controlled to remain horizontal based on this estimate. A prototype robot, easily constructed from inexpensive and commonly available parts, was implemented to experimentally validate the architecture. Its implementation is simple and cost-effective, making it a viable solution for search and rescue operations. The experimental results demonstrate the effectiveness of the proposed method, showing that the robot can keep its storage box level and identify the location of survivors while moving on sloped terrain.

1. Introduction

At disaster sites where buildings collapse or hazardous substances are released as a result of natural or man-made disasters, the activities of rescue workers can be severely limited by an environment that is dangerous to humans. Nonetheless, the main objective of rescue workers is to find as many survivors as possible in a given situation and to minimize the loss of life. When searching for survivors, however, rescue workers have little choice but to rely on sight and hearing. If a victim is unconscious, they cannot call out for rescue, and it is very difficult to determine their location if they are out of the rescuers' sight. One way to deal with this is to use search dogs.
However, as rescue workers and search dogs are living beings, they may suffer emotional and psychological trauma, and mistakes may occur during search and rescue. Research on search and rescue robots is therefore being actively conducted to address these problems. Search and rescue robots can be broadly classified into two categories: one in which the robot performs search and rescue autonomously [1,2,3,4,5,6], and one in which humans and robots cooperate to search for and rescue survivors [7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]. Of the two, the second is more effective at searching for and rescuing survivors, because the search and rescue environment is extremely complicated and there are technical limitations that a robot must overcome to perform the role alone, without human intervention.
Therefore, the best solution is for the rescue work to be completed by a cooperative team of a robot and a rescue worker. The role of the robot is to find the survivors' location and move there, while the rescue worker follows the robot to rescue the survivors. Meanwhile, the robot loads and carries the various items necessary for the rescue in place of the rescue worker. Existing studies related to this scenario almost always focus on improvements in the following technologies.
- Search for survivors [7,8,9,10];
- Recognition of risk factors and the surrounding environment [11,12];
- Mobility without limitations on irregular terrain [13,14];
- Obstacle avoidance in irregular environments [7,8,9,14,15,16];
- Interaction between rescue workers and collaborative robots [14,17,18,19];
- Interaction with rescue headquarters [19,20,21,22,23].
All of these technologies are essential for search and rescue robots, and each has achieved the desired performance in its own technical field. However, robots that integrate and operate all of these technologies systematically are rare, because integrating them all can significantly increase hardware and software costs. Therefore, in this paper, we propose a low-cost, high-efficiency assistant robot architecture that primarily focuses on survivor search and movement control, the most crucial of the above technologies. The others are beyond the scope of this paper and are left for future work, although obstacle avoidance and the safe storage and loading of rescue items have been added as ancillary functions.
The proposed sensing and control architecture is unique compared with prior designs in that it focuses on utilizing minimal sensor resources for maximum efficiency. This results in a compact and straightforward algorithm design, making it feasible to implement on lower-performance processors. In contrast, prior human survivor sensing architectures have emphasized enhancing sensing accuracy through costly camera-based sensors [7,8,9,10]. These designs relied on complex deep learning algorithms, extensive image processing, and large amounts of image data to detect human survivors, necessitating more data storage and a more powerful processor for computation.
The overall structure of the robot proposed in this paper is illustrated in Figure 1. As shown in Figure 1, the designed robot consists of two main parts: (1) a sensory system and (2) actuation systems. The sensory system uses an ultrasonic sensor, thermal IR array sensor, and three-axis accelerometer to detect obstacles, human survivors (rescue requesters), and the tilt angle of the storage box, respectively. The actuation systems utilize a linear actuator to regulate the tilt angles of the storage box to horizontal and two caterpillars with DC motors to drive the robot body.
The Arduino Mega in Figure 1 is the main controller of the robot, which processes information from the sensory system and generates control commands for the actuation systems based on the information. Figure 2 shows the overall block diagram of the connections and signal flow, as well as the roles of the components in Figure 1.
The remainder of this paper is organized as follows. In Section 2, the theoretical methodologies of the proposed sensor architecture for human survivors, obstacles, and body tilt angle detection using a thermal IR sensor array, ultrasonic sensors, and three-axis accelerometer, respectively, are presented. In Section 3, we describe the control architecture to implement the proposed methodologies. Then, in Section 4, the experimental results are presented to verify the proposed methods. Finally, in Section 5, we draw conclusions and provide an outlook on future works.

2. Sensing Architecture

2.1. Human Survivor Location Detection Using a Thermal IR Array Sensor

In order to detect the positions of human survivors, we chose a thermal IR array sensor. The performance of this sensor is similar to that of thermal IR cameras when detecting humans, but it is more compact and cheaper. The q × q IR sensors are arranged as shown in Figure 3. Each sensor in the array measures the temperature of both humans and the ambient environment.
To detect the positions of human survivors, we divided the array into three areas: A, B, and C, as shown in Figure 3. This division simplifies recognizing the direction in which the robot needs to be driven when controlling the two DC motors in its body: using these areas, the robot can approximate whether it should turn right, turn left, or maintain its current heading. The estimate is then refined using Equation (2), which allows the robot to locate the detected survivor more accurately. Here, we assume that area B is the robot's heading direction (the center of the sensor array). For instance, if a human is detected in area A or C instead of area B, a motor control command is generated to turn the robot so that the human's thermal signature moves into area B.
Since the typical temperature range of a human body is 35.5~37.5 °C, we chose 34~38 °C as the threshold to detect the position of a human body, as shown in
$$T_A=\sum_{l=1}^{L}T_l^{(34\sim38)},\qquad T_B=\sum_{m=1}^{M}T_m^{(34\sim38)},\qquad T_C=\sum_{n=1}^{N}T_n^{(34\sim38)}\tag{1}$$
where TA, TB, and TC are the summed temperatures of the sensor cells whose readings fall within 34~38 °C, and L, M, and N are the numbers of such cells in areas A, B, and C, respectively.
Based on Equation (1) and Tmax defined as Tmax = max(TA, TB, TC), the horizontal position angle α of a human survivor, as shown in Figure 4, is detected as
$$\alpha=\begin{cases}\dfrac{2\gamma\,i_{h+3}+3\gamma\,i_{h+4}+\cdots+\left(\frac{q}{2}-1\right)\gamma\,i_{q-1}+\frac{q}{2}\gamma\,i_{q}}{\sum_{j=h+3}^{q}i_{j}}, & \text{if }T_{\max}=T_{A}\\[2ex]\dfrac{-\gamma\,i_{h+1}+\gamma\,i_{h+2}}{\sum_{j=h+1}^{h+2}i_{j}}, & \text{if }T_{\max}=T_{B}\\[2ex]-\dfrac{\frac{q}{2}\gamma\,i_{1}+\left(\frac{q}{2}-1\right)\gamma\,i_{2}+\cdots+3\gamma\,i_{h-1}+2\gamma\,i_{h}}{\sum_{k=1}^{h}i_{k}}, & \text{if }T_{\max}=T_{C}\end{cases}\tag{2}$$
where i1 to iq are the number of sensor cells detecting 34~38 °C in columns 1 to q, shown in Figure 4, respectively. γ is the viewing angle resolution defined as β/q, where the viewing angle of the IR sensor array is ±β degrees, and h is the number of the last column in area C.
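To make the computation concrete, the following Python sketch implements Equations (1) and (2) for an 8 × 8 frame (q = 8, h = 3, β = 30 degrees, matching the AMG8833 in Table 1). The sign convention (negative angles for area C), the tie-breaking order, and all names are illustrative assumptions, not code from the prototype.

```python
import numpy as np

# Assumed constants: q = 8 columns, area C = columns 1..h with h = 3,
# viewing angle +/- 30 deg (AMG8833, Table 1); gamma = beta / q per Eq. (2).
Q, H = 8, 3
BETA = 30.0
GAMMA = BETA / Q
T_LO, T_HI = 34.0, 38.0  # human-body detection window [deg C]

def survivor_angle(frame: np.ndarray):
    """Estimate the horizontal survivor angle alpha [deg] from a q x q thermal frame."""
    hot = (frame >= T_LO) & (frame <= T_HI)      # cells in the human-body range
    i = hot.sum(axis=0).astype(float)            # i_j: hot cells per column (0-based)
    # Eq. (1): summed in-range temperatures of areas C, B, and A
    t_c = frame[:, :H][hot[:, :H]].sum()
    t_b = frame[:, H:H + 2][hot[:, H:H + 2]].sum()
    t_a = frame[:, H + 2:][hot[:, H + 2:]].sum()
    if t_a == 0 and t_b == 0 and t_c == 0:
        return None                              # no human-range pixel in view
    t_max = max(t_a, t_b, t_c)
    if t_max == t_a:                             # area A: 1-based columns h+3..q
        w = GAMMA * np.arange(2, Q - H)          # weights 2g, 3g, ..., (q/2)g
        return float(w @ i[H + 2:] / i[H + 2:].sum())
    if t_max == t_b:                             # area B: columns h+1 and h+2
        return float(GAMMA * (i[H + 1] - i[H]) / (i[H] + i[H + 1]))
    w = GAMMA * np.arange(Q - H - 1, 1, -1)      # area C weights (q/2)g, ..., 2g
    return float(-(w @ i[:H]) / i[:H].sum())     # negative: survivor left of center
```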

2.2. Obstacle Detection and Storage Box Tilt Angle Detection

Obstacle detection is performed by ultrasonic sensors. Figure 5 shows the configuration of the ultrasonic sensors for measuring the position of an obstacle in three-dimensional space. Let d1, d2, d3, and d4 be the distances measured from the sensors located at (a, 0, 0), (0, 0, 0), (0, b, 0), and (0, −b, 0) in the coordinate system (x, y, z) to an obstacle, respectively. The distance d* measured by the ultrasonic sensor with index * is given by
$$d_{*}=\frac{c_t\,\Delta t_{*}}{2},\qquad c_t=331.3+0.606\,T\ \ \mathrm{[m/s]}\tag{3}$$
where ct is the speed of sound; Δt* is the ultrasonic propagation time, called time-of-flight (TOF), from the sensor with index * to the object; and T is the ambient temperature. Using Equation (3) and geometric information, the three-dimensional position (φx, φy, φz) of the object can be derived as follows
$$\varphi_x=\frac{d_2^2-d_1^2}{2a}+\frac{a}{2},\qquad \varphi_y=\frac{d_2^2-d_3^2}{2b}+\frac{b}{2},\qquad \varphi_z=\sqrt{d_2^2-\varphi_x^2-\varphi_y^2}\tag{4}$$
After this, the angular position λ of an obstacle in x-y plane can be defined as
$$\lambda=\tan^{-1}\!\left(\frac{\varphi_x}{\varphi_y}\right)\tag{5}$$
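These three equations map directly onto a few lines of code. The sketch below, assuming the same sensor layout as Figure 5 (the function names and the noise clamp on the radicand are our additions), converts a time-of-flight reading into a distance and then into the obstacle position and angle:

```python
import math

def speed_of_sound(temp_c: float) -> float:
    """c_t = 331.3 + 0.606*T [m/s], the temperature-dependent term of Eq. (3)."""
    return 331.3 + 0.606 * temp_c

def tof_to_distance(dt: float, temp_c: float = 20.0) -> float:
    """Eq. (3): the pulse travels out and back, so the range is c_t * dt / 2."""
    return speed_of_sound(temp_c) * dt / 2.0

def obstacle_position(d1: float, d2: float, d3: float, a: float, b: float):
    """Eq. (4)-(5): 3D position (phi_x, phi_y, phi_z) and angle lambda [deg]."""
    phi_x = (d2**2 - d1**2) / (2 * a) + a / 2
    phi_y = (d2**2 - d3**2) / (2 * b) + b / 2
    # Clamp at 0 so small measurement noise cannot make the radicand negative.
    phi_z = math.sqrt(max(d2**2 - phi_x**2 - phi_y**2, 0.0))
    lam = math.degrees(math.atan2(phi_x, phi_y))   # lambda in the x-y plane
    return phi_x, phi_y, phi_z, lam
```

Note that the fourth distance d4 does not appear in Equation (4); presumably it provides a redundant reading that can cross-check φy.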
The tilt angle θ of the storage box is measured with an accelerometer and calculated as
$$\theta=\tan^{-1}\!\left(\frac{A_y}{A_z}\right)\tag{6}$$
where Az and Ay are the acceleration on the z-axis and y-axis, respectively. The measurement setup using an accelerometer is shown in Figure 6.

3. Control Architecture

3.1. Target Orientation and Target Trajectory of Robot Motion

Assuming that the number of human survivors in front of the robot is n, a set of position angles of human survivors, A(t), can be defined as
$$A(t)=\begin{bmatrix}\alpha_1(t)&\alpha_2(t)&\cdots&\alpha_i(t)\end{bmatrix}^{T},\quad i=1,\dots,n\tag{7}$$
where αi(t) is obtained from Equation (2). This set can include the position angles of multiple human survivors, and can be used to guide the robot’s motion towards the survivors.
Using the angle set (7), a target orientation of the robot, pt(t), is defined as
$$p_t(t)=\underset{i=1,\dots,n}{\mathrm{mean}}\left[\alpha_i(t)-\min_{i=1,\dots,n}\alpha_i(t)\right]+\min_{i=1,\dots,n}\alpha_i(t)\tag{8}$$
The reason for this definition is as follows: if there is only a single survivor, the robot’s ideal target orientation will match the survivor’s position. However, if there are multiple survivors, the target orientation should point towards the approximate center of the overall survivor distribution, not a specific survivor. In other words, Equation (8) is defined so that the front of the robot faces the middle position of the multiple survivors when the robot searches for human survivors and moves towards them. The ultimate mission of rescuing survivors is carried out by rescue workers. Therefore, when there exist multiple survivors, the proposed method assists the rescue workers by enabling the robot to guide the workers to the closest location of the survivors.
Thus, according to the geometry in Figure 4, the robot's measured orientation to the survivor, pt(t), should converge to 0 degrees so that the robot always directly faces the survivor. If pt(t) is not 0 degrees, the robot is controlled to move towards this orientation. For example, if the survivor's position is measured as pt(t0) at an initial time t0, as shown in Figure 7, the robot's target orientation at t0 is pt(t0). This point serves as the starting point for the robot's orientation control trajectory, p(t). At the final time tF, when the control is terminated, the robot's heading should have converged so that the measured orientation reaches 0 degrees. Therefore, as shown by the dotted green line in Figure 7, the target trajectory from the initial orientation to the target orientation is defined by
$$p(t)=p_t(t_0)\left(1-e^{-\kappa(t)/\tau}\right),\quad t\in[t_0,t_F];\qquad \kappa(t)=\kappa(t-1)+\Delta_p,\quad \kappa(t)\in\left[0,\;p_t(t_0)+\varepsilon_p\right]\tag{9}$$
where τ, Δp, and εp denote the time constant of the trajectory, the per-step increment of κ(t), and the final control error tolerance for the target trajectory at tF, respectively. The resulting target orientation, p̄t(t), at each time step after following p(t) can be estimated using
$$\bar{p}_t(t)=p_t(t_0)-p(t),\quad t\in[t_0,t_F]\tag{10}$$
As a result, the target orientation measured at tF when the control is completed is given by
$$p_t(t_F)=0\pm\varepsilon_p\tag{11}$$
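A compact sketch of Equations (8)-(10) follows. The sample stepping of κ(t), the tolerance values, and the use of |pt(t0)| to handle negative initial angles (as occurs in Case II of Section 4) are our assumptions:

```python
import math

TAU = 2.0       # trajectory time constant tau (Table 2)
DELTA_P = 0.5   # assumed per-step increment of kappa(t) [deg]
EPS_P = 1.0     # assumed final orientation tolerance epsilon_p [deg]

def target_orientation(alphas):
    """Eq. (8): face the middle of the survivor distribution."""
    lo = min(alphas)
    return sum(a - lo for a in alphas) / len(alphas) + lo

def trajectory(p_t0: float):
    """Yield (p(t), p_bar_t(t)) per Eqs. (9)-(10) until the error is within EPS_P."""
    kappa = 0.0
    while kappa <= abs(p_t0) + EPS_P:
        p = p_t0 * (1.0 - math.exp(-kappa / TAU))   # Eq. (9)
        yield p, p_t0 - p                           # Eq. (10): remaining error
        kappa += DELTA_P                            # kappa(t) = kappa(t-1) + Delta_p

# e.g., Case I starts from p_t(0) = 30 deg:
for p, p_bar in trajectory(30.0):
    pass  # feed p into the caterpillar speed commands of Section 3.2
```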

3.2. Control Command Generation with Obstacle Avoidance to Drive Two Caterpillars and a Linear Actuator

In order to move the robot to human survivors using the target orientation and trajectory obtained from Equations (8) and (9), the two caterpillars need to be properly controlled, taking into account the mobile robot kinematics. Figure 8 shows the mobile robot kinematic model with two caterpillars. According to curvilinear and rotational kinematics, the robot’s linear and angular velocity, v(t) and ω(t), are expressed, respectively, as follows
$$\omega(t)=\frac{v_L(t)-v_R(t)}{L},\qquad v(t)=\frac{v_L(t)+v_R(t)}{2}\tag{12}$$
where vL(t) and vR(t) are the driving velocities of the left and right caterpillars, respectively.
As ω(t) can be obtained by taking the derivative of p(t) in Equation (9) and v(t) is given to move the robot at an appropriate speed, only vL(t) and vR(t) are unknown in Equation (12). Additionally, when the robot approaches a human survivor within a distance εφ from its front, it should stop moving. Therefore, the resulting caterpillar velocities can be derived as
$$\begin{cases}v_L(t)=v_R(t)=0, & \text{if }\lambda(t)=p_t(t)\pm\varepsilon_p\ \text{and}\ \varphi_y(t)\le\varepsilon_\varphi\\[1ex]v_L(t)=\dfrac{L\dot{p}(t)+2v(t)}{2},\quad v_R(t)=\dfrac{2v(t)-L\dot{p}(t)}{2}, & \text{otherwise}\end{cases}\tag{13}$$
For the two caterpillars to produce the velocities in Equation (13), they should be driven by DC motors whose angular velocities are expressed by
$$\begin{cases}\omega_{mL}(t)=\omega_{mR}(t)=0, & \text{if }\lambda(t)=p_t(t)\pm\varepsilon_p\ \text{and}\ \varphi_y(t)\le\varepsilon_\varphi\\[1ex]\omega_{mL}(t)=\dfrac{L\dot{p}(t)+2v(t)}{2r},\quad \omega_{mR}(t)=\dfrac{2v(t)-L\dot{p}(t)}{2r}, & \text{otherwise}\end{cases}\tag{14}$$
where r, ωmL(t), and ωmR(t) are the radius of rotation of the caterpillar and the driving angular velocities of the left and right DC motors, respectively.
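Equations (12)-(14) reduce to the following sketch. The trajectory rate ṗ(t) is assumed to be supplied in rad/s so that Lṗ(t) has velocity units; the tolerance values are illustrative assumptions, while the geometric constants follow Table 2:

```python
L_WIDTH = 0.3    # robot width L [m] (Table 2)
R_CAT = 0.03     # caterpillar rotation radius r [m] (Table 2)
V_REQ = 0.5      # required robot speed v(t) [m/s] (Table 2)
EPS_P = 1.0      # assumed heading tolerance epsilon_p [deg]
EPS_PHI = 0.3    # assumed stop distance epsilon_phi [m]

def caterpillar_speeds(p_dot, lam, p_t, phi_y):
    """Eq. (13): track velocities [m/s]; stop when the survivor is reached."""
    if abs(lam - p_t) <= EPS_P and phi_y <= EPS_PHI:
        return 0.0, 0.0                       # object on the heading and close: stop
    v_l = (L_WIDTH * p_dot + 2.0 * V_REQ) / 2.0
    v_r = (2.0 * V_REQ - L_WIDTH * p_dot) / 2.0
    return v_l, v_r

def motor_speeds(v_l, v_r):
    """Eq. (14): DC-motor angular velocity commands [rad/s] = v / r."""
    return v_l / R_CAT, v_r / R_CAT
```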
When the robot is inclined or declined owing to terrain conditions, the robot’s storage box also tilts at an angle θ(t), as shown in Figure 6. The storage box should be leveled so that multiple items can be safely stored, regardless of terrain conditions. Therefore, the robot should regulate the angle θ(t) to 0 degrees during the entire time horizon and control the linear actuator with stroke variation for this regulation.
Based on the geometry as shown in Figure 9, the stroke variation Δs(t) with respect to the angle θ(t) can be derived as
$$\Delta s(t)=\sqrt{l^2+a^2+2\,a\,l\sin\theta(t)}-s,\quad t\in[t_0,t_F]\tag{15}$$
where l, a, and s are the height between the storage box and robot base, the length of the storage box, and the length of the linear actuator at zero stroke, respectively.
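Equations (6) and (15) combine into a short leveling routine. A useful sanity check with the Table 2 geometry: at θ = 0, √(l² + a²) ≈ 0.15 m = s, so Δs(0) ≈ 0. The function names below are illustrative assumptions:

```python
import math

L_BASE = 0.075   # l: height between storage box and robot base [m] (Table 2)
A_BOX = 0.13     # a: length of the storage box [m] (Table 2)
S_ZERO = 0.15    # s: linear-actuator length at zero stroke [m] (Table 2)

def tilt_angle(a_y: float, a_z: float) -> float:
    """Eq. (6): theta = atan(A_y / A_z); atan2 preserves the slope's sign."""
    return math.atan2(a_y, a_z)

def stroke_change(theta: float) -> float:
    """Eq. (15): actuator stroke change needed to bring the box back to level."""
    return math.sqrt(L_BASE**2 + A_BOX**2
                     + 2 * A_BOX * L_BASE * math.sin(theta)) - S_ZERO
```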

3.3. Control Algorithm with Obstacle Avoidance

Figure 10 shows the overall block diagram of the robot motion control with obstacle avoidance for reaching a human survivor using Equations (8)–(10), (13), and (14), and the sensing architecture. This controller consists of a feed-forward compensator, called a target trajectory compensator, a speed controller to drive the two caterpillars, and an obstacle avoidance algorithm.
The target trajectory compensator corrects the error between the trajectory command p̄t(t) and the measured orientation pt(t), subject to the δ interference of the obstacle avoidance algorithm, as shown in
$$p_c(t)=\begin{cases}p(t)+K\left(\bar{p}_t(t)-p_t(t)\right), & \text{if }\lambda(t)>p_t(t)+\delta\ \text{or}\ \lambda(t)<p_t(t)-\delta\\p(t)-\delta+K\left(\bar{p}_t(t)-p_t(t)\right), & \text{if }p_t(t)<\lambda(t)<p_t(t)+\delta\\p(t)+\delta+K\left(\bar{p}_t(t)-p_t(t)\right), & \text{if }p_t(t)-\delta<\lambda(t)<p_t(t)\end{cases}\tag{16}$$
where pc(t) and K are a compensated target trajectory and compensation gain, respectively. K is chosen to satisfy the margin of error required for control performance. δ is a configuration space for the robot to avoid obstacles.
The reason for using the target trajectory compensator is as follows: the speed control commands (13) for each caterpillar were derived by assuming that p(t) equals p̄t(t). However, the output accuracy of pt(t) may vary depending on the control performance of the caterpillar speed controller. Therefore, to approximate pt(t) ≈ p̄t(t), the target trajectory compensation algorithm is designed as in Equation (16).
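The compensator of Equation (16) can be sketched as below; the gain K, the configuration-space half-width δ, and the handling of the boundary λ(t) = pt(t) are our assumptions:

```python
K = 0.5       # assumed compensation gain
DELTA = 15.0  # assumed obstacle configuration-space half-width [deg]

def compensated_trajectory(p, p_bar_t, p_t, lam):
    """Eq. (16): correct the trajectory and sidestep obstacles near the heading."""
    corr = K * (p_bar_t - p_t)          # feed-forward orientation-error correction
    if lam > p_t + DELTA or lam < p_t - DELTA:
        return p + corr                 # obstacle clear of the configuration space
    if lam >= p_t:
        return p - DELTA + corr         # obstacle at/right of heading: shift by -delta
    return p + DELTA + corr             # obstacle left of heading: shift by +delta
```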
The speed controller for the caterpillars is illustrated in Figure 11. The velocity commands vL(t) and vR(t) for each caterpillar, as well as the angular velocity commands ωmL(t) and ωmR(t) for each DC motor, are generated from the output pc(t) of the trajectory compensator, the required robot velocity v(t), and the obstacle location λ(t) and φy(t), using Equations (13) and (14). The velocity v(t) cannot be measured directly because the robot lacks a speed sensor; instead, a constant value of v(t) is employed solely to calculate the required velocities of the left and right caterpillars and, in turn, the required angular velocities of the two DC motors driving them. It is assumed that v(t) is properly maintained by monitoring the output error of the DC motor velocity; beyond this, v(t) is not discussed further in this paper.
As each caterpillar uses a DC motor with an encoder, the measured velocity is used as feedback information to increase control accuracy. The control method employed is the PID control algorithm, with the gain of the controller determined through trial and error.
Additionally, a position controller is depicted in Figure 12 to level the storage box, so that multiple items can be safely stored in the box regardless of terrain conditions. As the robot needs to regulate the angle θ(t) to 0 degrees at all times, the regulation command will be 0 degrees, and the measured tilt angle θ(t) will be feedback information.
The stroke command Δs(t) for a linear actuator is generated using Equation (15) with an error angle θerr(t) between 0 degrees and θ(t). The linear actuator is then controlled to adjust Δs(t) with the measured position feedback. The control method used is the PID control algorithm, and the gain of the PID controller is determined through trial and error.
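Since the paper tunes the PID gains for both the caterpillar speed loops and the actuator position loop by trial and error and does not report their values, the sketch below shows only the generic discrete PID form assumed here; all gain values are placeholders:

```python
class PID:
    """Discrete PID controller of the form used for the speed and position loops."""
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint: float, measured: float) -> float:
        err = setpoint - measured
        self.integral += err * self.dt              # accumulate integral term
        deriv = (err - self.prev_err) / self.dt     # backward-difference derivative
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# One controller per DC motor (encoder feedback) and one for the linear actuator:
left_speed_pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)   # placeholder gains
```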

4. Experiment

4.1. Experimental Setup

To evaluate the effectiveness of the proposed method, a prototype robot system was built as illustrated in Figure 1 and Figure 2 and shown in Figure 13. The prototype features a single linear actuator and storage box, as shown in Figure 6 and Figure 9, and is driven by two DC motors. The linear actuator used in the experiment is the LM4075 from Motorbank (Seoul, Republic of Korea), with a maximum stroke of ±60 mm and a maximum load capacity of 900 N. The robot moves on two caterpillars, driven by RB35GM 09Type motors from D&JWith (Seoul, Republic of Korea), each with a rated torque of 6.0 kgf-cm and a gear reduction ratio of 150. Both types of actuators are equipped with encoders for precise speed and position control.
The prototype robot was equipped with a thermal IR sensor array, an ultrasonic sensor, and an accelerometer. The specific models chosen for the prototype are Adafruit’s AMG8833 for the thermal IR sensor array, HC-SR04 for the ultrasonic sensor, and Analog Devices’ ADXL345 for the accelerometer. The specifications for these sensors can be found in Table 1.
All data are transmitted to the main processor via the I2C protocol. The main processor used in the prototype is an Arduino Mega 2560 (Turin, Italy), which receives and transmits all information from the sensory system and actuation system through its built-in I2C module. Additionally, motor driving circuits are used to accurately control the motors. With this information, the main processor is programmed to incorporate all control algorithms, as illustrated in Figure 10, Figure 11 and Figure 12. Table 2 provides a summary of the specific parameter values used in designing the control algorithm and conducting the experiment.

4.2. Experimental Results

(Case I) The experimental scenario and expected outcomes are as follows: A human, assumed to be a survivor, and a sloped terrain are positioned at an arbitrary angle relative to the robot’s moving direction. The robot employs its sensors to determine the tilt angle of the terrain and the location of the survivor. Then, the robot moves towards the survivor, maintaining the level of its storage box using the control algorithms outlined in Section 3.
As illustrated in Figure 14, the robot is able to navigate effectively towards the human survivor while keeping the storage box level, regardless of the slope of the terrain. Once the robot reaches a safe distance from the survivor, it stops, demonstrating the efficacy of the proposed method. Overall, the experimental results show that the proposed method is capable of maintaining a level storage box and identifying the location of survivors while navigating sloped terrain.
The experimental results for this scenario are shown in Figure 15, Figure 16 and Figure 17. Initially, the human position is measured at pt(0) = 30 degrees. Using this initial position, the robot creates a target trajectory p(t) toward the human, indicated by the blue line in Figure 15. The resulting motor speed commands generated to track the target trajectory are indicated by the blue lines in Figure 16. As shown by the red dotted lines in Figure 15 and Figure 16, when both motors were controlled to track the blue-line commands, the robot successfully followed the target trajectory, and the measured position p̄t(t) of the human survivor also converged to 0 degrees.
As illustrated in Figure 16, when the two motors are controlled so that their outputs (red lines) track the commands (blue lines), the robot effectively follows the target trajectory. The measured position of the human eventually converges to 0 degrees, as shown in Figure 15. From Figure 16, it can be observed that the stop condition in Equation (14) functions well, as the motor speed command becomes 0 after approximately 45 s. Some errors and response delays are present in both the transient and steady state, owing to the dynamics of the motors and controllers. These response characteristics may also cause the trajectory and measured-position errors seen in Figure 15. Nevertheless, as these error bounds fall within the acceptable control range, they do not pose a problem for searching for human survivors.
As the robot begins to navigate the terrain, the tilt angle of the storage box is measured, as depicted in the first plot of Figure 17. The angle varies depending on the slope of the terrain. Based on the measured angle, a command for the linear motor attached to the storage box is generated in order to maintain an angle of 0 degrees, as indicated by the blue line in the second plot of Figure 17. Owing to the characteristics of the linear motor and its controller, there may be some delay in response to commands and errors may occur. However, these differences between commands and responses are within acceptable limits.
(Case II) The experimental scenario is as follows: the person presumed to be a survivor is located at a random angle from the final position of the Case I experiment, on the side opposite the initial position of the Case I experiment.
As shown in Figure 18, the robot can effectively move towards human survivors. When the robot reaches a safe distance from the survivors, it stops, demonstrating the effectiveness of the proposed method. Overall, the experimental results show that the proposed method identifies the location of survivors regardless of their position relative to the robot.
The experimental results for this scenario are shown in Figure 19 and Figure 20. Initially, the human position is measured at pt(0) = −25 degrees. Using this initial position, the robot creates a target trajectory p(t) toward the human, indicated by the blue line in Figure 19. The resulting motor speed commands generated to track the target trajectory are indicated by the blue lines in Figure 20. As shown by the red dotted lines in Figure 19 and Figure 20, when both motors were controlled to track the blue-line commands, the robot effectively followed the target trajectory, and the measured position p̄t(t) of the human survivor eventually converged to 0 degrees.

5. Conclusions

In this paper, we propose a simple and low-cost sensing and control architecture for a search and rescue assistant robot using a thermal IR sensor array, an ultrasonic sensor, and a three-axis accelerometer.
The proposed architecture allows us to estimate the location of human survivors through a low-cost thermal IR sensor array, generate and control the trajectory for approaching the searched human survivors, and perform obstacle avoidance and control using 3D location estimation of obstacles with a 1D ultrasonic sensor integration. Additionally, we can estimate the tilt angle of the robot according to terrain conditions using a three-axis accelerometer and perform horizontal control of the storage box angle.
All of these methodologies were implemented in a prototype robot that can be easily constructed from low-cost, commonly available parts. We conducted experimental performance tests using this prototype robot. The experimental results show that the performance of the proposed architecture is very useful for the search and rescue of survivors and can be implemented sufficiently with commonly available, inexpensive parts.
However, the proposed methodology also has some limitations. The performance of searching for survivors is dependent on the number of pixels in the thermal IR sensor array. As a result, the accuracy of the estimated location of survivors in the prototype robot may be lower owing to the thermal IR sensor array having only 64 pixels. In particular, it is important to note that the proposed thermal-sensor-array-based robot cannot be used in fire environments, as its performance in detecting survivors in high-temperature environments may be impacted by temperature saturation. To improve the search and rescue performance, it would be ideal to use sensors with better specifications, but this is not feasible owing to high implementation costs and technical limitations.
Nevertheless, the proposed method is more practical than those of other studies because it is well suited to search and rescue roles, inexpensive, and maximizes the strengths of each sensor while compensating for their weaknesses.
In future work, we need to improve and modify the leveling structure of the storage box for safer delivery. Because the leveling structure in this paper controls only x-axis tilting, the current structure cannot respond to y-axis tilting; this modification remains for future work. In the prototype robot, the sensors' noise does not impact the control of the robot, as the noise is experimentally found to be very small relative to the signal. However, we plan to conduct a stability analysis in future work, as the stability of the system is crucial, even though it falls outside the scope of the current paper.

Author Contributions

Conceptualization, T.H.K., S.H.B., C.H.H. and B.H.; methodology, T.H.K., S.H.B. and B.H.; software, S.H.B. and C.H.H.; validation, T.H.K. and S.H.B.; formal analysis, T.H.K. and B.H.; investigation, B.H.; resources, B.H.; data curation, T.H.K.; writing—original draft preparation, T.H.K. and B.H.; writing—review and editing, B.H.; funding acquisition, B.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF), with a grant funded by the Korean government (MSIT) (No. 2021R1F1A1063895).

Data Availability Statement

Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Yamauchi, B.M. PackBot: A Versatile Platform for Military Robotics. In Proceedings of the Unmanned Ground Vehicle Technology VI, Orlando, FL, USA, 13–15 April 2004.
2. Jacoff, A.; Messina, E. DHS/NIST Response Robot Evaluation Exercises. In Proceedings of the IEEE International Workshop on Safety, Security and Rescue Robotics, Rome, Italy, 27–29 September 2006.
3. Yoshida, T.; Nagatani, K.; Tadokoro, S.; Nishimura, T.; Koyanagi, E. Improvement to the Rescue Robot Quince toward Future Indoor Surveillance in the Fukushima Daiichi Nuclear Power Plant. Field Serv. Robot. 2014, 92, 19–32.
4. Murphy, R.R. Disaster Robotics; MIT Press: Boston, MA, USA, 2014.
5. Kim, D.Y.; Lee, J.M.; Shin, D.I.; Shin, S.; Hwang, J.H. Quick Target Position Command for Quadrotor in 3D Indoor Map of Disaster Accident Management Using Robot System. In Proceedings of the IROS2016-IEEE/RSJ International Conference on Intelligent Robots and Systems, Daejeon, Republic of Korea, 9–14 October 2016.
6. Montemerlo, M.; Thrun, S.; Koller, D.; Wegbreit, B. FastSLAM: A Factored Solution to the Simultaneous Localization and Mapping Problem. In Proceedings of the 18th National Conference on Artificial Intelligence, Edmonton, AB, Canada, 28 July–1 August 2002.
7. Saeedi, P.; Sorensen, S.; Hailes, S. Performance-aware exploration algorithm for search and rescue robots. In Proceedings of the 2009 IEEE International Workshop on Safety, Security and Rescue Robotics, Denver, CO, USA, 3–6 November 2009.
8. Dey, G.K.; Hossen, R.; Noor, M.S.; Ahmmed, K.T. Distance controlled rescue and security mobile robot. In Proceedings of the 2013 International Conference on Informatics, Electronics and Vision, Dhaka, Bangladesh, 17–18 May 2013.
9. Chowdhury, M.S.S.; Nawal, M.F.; Rashid, T.; Rhaman, K. Terminal analysis of the operations of a rescue robot constructed for assisting secondary disaster situations. In Proceedings of the 2015 IEEE Region 10 Humanitarian Technology Conference, Cebu, Philippines, 9–12 December 2015.
10. Uddin, Z.; Islam, M. Search and rescue system for alive human detection by semi-autonomous mobile rescue robot. In Proceedings of the 2016 International Conference on Innovations in Science, Engineering and Technology, Dhaka, Bangladesh, 28–29 October 2016.
11. Lau, H.Y.K.; Ko, A. An immune robotic system for humanitarian search and rescue (application stream). In Artificial Immune Systems; Castro, L.N., Von Zuben, F.J., Knidel, H., Eds.; Springer: Berlin/Heidelberg, Germany, 2007; pp. 191–203.
12. Takeda, T.; Ito, K.; Matsuno, F. Path generation algorithm for search and rescue robots based on insect behavior parameter optimization for a real robot. In Proceedings of the 2016 IEEE International Symposium on Safety, Security, and Rescue Robotics, Lausanne, Switzerland, 23–27 October 2016.
13. Sheh, R. The Redback: A Low-Cost Advanced Mobility Robot; UNSW-CSE Technical Report; Sydney, Australia, April 2005.
14. Birk, A.; Kenn, H.; Carpin, S.; Pfingsthorn, M. Toward Autonomous Rescue Robots. In Proceedings of the First International Workshop on Synthetic Simulation and Robotics to Mitigate Earthquake Disasters, Padova, Italy, 6 July 2003.
15. Zaman, H.U.; Hossain, M.S.; Wahiduzzaman, M.; Asif, S. A novel design of a robotic vehicle for rescue operation. In Proceedings of the 2015 18th International Conference on Computer and Information Technology, Dhaka, Bangladesh, 21–23 December 2015.
16. Jiko, M.N.; Shayket, M.H.; Bhuiyan, A.G.; Rabby, G. Design and implementation of amphibious smart rescue robot. In Proceedings of the 2016 2nd International Conference on Electrical, Computer and Telecommunication Engineering, Rajshahi, Bangladesh, 8–10 December 2016.
17. Murphy, R. Human-Robot Interaction in Rescue Robotics. IEEE Trans. Syst. Man Cybern. Part C 2004, 34, 138–153.
18. Ko, A.W.Y.; Lau, H.Y.K. Intelligent robot-assisted humanitarian search and rescue system. Int. J. Adv. Robot. Syst. 2009, 6, 121–128.
19. Mora Vargas, A.E.; Mizuuchi, K.; Endo, D.; Rohmer, E.; Nagatani, K.; Yoshida, K. Development of a Networked Robotic System for Disaster Mitigation: Navigation System Based on 3D Geometry Acquisition. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–15 October 2006.
20. Vincent, I. Control Algorithms for a Shape-Shifting Tracked Robotic Vehicle Climbing Obstacles; Technical Report; Defence Research and Development-Suffield: Suffield, AB, Canada, 2008.
21. Birk, A.; Kenn, H. A rescue robot control architecture ensuring safe semi-autonomous operation. In RoboCup 2002: Robot Soccer World Cup VI; Kaminka, G.A., Lima, P.U., Rojas, R., Eds.; LNAI 2752; Springer: Berlin/Heidelberg, Germany, 2003; pp. 254–262.
22. Marques, C.F.; Cristóvão, J.; Alvito, P.; Lima, P.U.; Frazão, J.; Ribeiro, M.I.; Ventura, R.M.M. A search and rescue robot with teleoperated tether docking system. Ind. Robot 2007, 34, 332–338.
23. Cavallin, K.; Svensson, P. Semi-Autonomous, Teleoperated Search and Rescue Robot. Master's Thesis, Umeå University, Umeå, Sweden, 2009.
Figure 1. Concept design of the overall system configuration.
Figure 2. Overall block diagram and roles among the components.
Figure 3. Thermal IR sensor arrangement.
Figure 4. Geometry for detecting the position of a human survivor.
Figure 5. Configuration of obstacle detection using ultrasonic sensors.
Figure 6. Geometry configuration for detecting the tilting angle, θ, of the storage box.
Figure 7. Definition of the target trajectory, p(t), for the robot heading.
Figure 8. Mobile robot kinematic model with two caterpillars for local coordinates (x, y).
Figure 9. Simplified geometry to derive the stroke of the linear motor for the angle θ.
Figure 10. Block diagram of the robot motion control algorithm to reach human survivors.
Figure 11. Speed control block diagram for the caterpillars.
Figure 12. Position control block diagram for the linear actuator.
Figure 13. The prototype robot for the experiment.
Figure 14. Video clip capture images during the experiments; Case I: clip order (a–f).
Figure 15. Experimental results of Case I: target trajectory vs. resulting trajectory and target orientation vs. resulting orientation.
Figure 16. Experimental results of Case I: motor speed commands vs. resulting outputs.
Figure 17. Experimental results of Case I: storage box tilt angle and linear motor stroke command vs. resulting output.
Figure 18. Video clip capture images during the experiments; Case II: clip order (a–d).
Figure 19. Experimental results of Case II: target trajectory vs. resulting trajectory and target orientation vs. resulting orientation.
Figure 20. Experimental results of Case II: motor speed commands vs. resulting outputs.
Table 1. Characteristics of sensors used for the prototype robot.

Thermal IR sensor array, Adafruit AMG8833 (New York, NY, USA)
- No. of pixels: 64 (8 × 8 matrix)
- Frame rate: 10 frames/s (max.)
- Measurement resolution: 0.25 °C
- Measurement range: 0~80 °C
- Accuracy: ±2.5 °C
- Viewing angle (vertical and horizontal): ±30 deg
- Human detection distance: 7 m (max.)

Ultrasonic sensor, Kuongshun HC-SR04 (Shenzhen, China)
- Max. range: 4 m
- Min. range: 2 cm
- Measurement angle: 15 deg
- Working frequency: 40 Hz

Accelerometer, Analog Devices ADXL345 (Wilmington, MA, USA)
- No. of measurement axes: 3
- Measurement range: ±2, ±4, ±8, ±16 g
- Output resolution (all ranges): 10 bits
- Typical sensitivity: 256 LSB/g
Table 2. Additional parameter values for controller design and experiment.

- Trajectory time constant, τ: 2
- Robot width, L: 0.3 m
- Required robot speed, v: 0.5 m/s
- Radius of rotation of caterpillar, r: 0.03 m
- Length of linear motor at zero stroke, s: 0.15 m
- Length of storage box, a: 0.13 m
- Height between storage box and robot base, l: 0.075 m


