Article

Influence of Camera Placement on UGV Teleoperation Efficiency in Complex Terrain

Faculty of Mechanical Engineering, Military University of Technology, 00-908 Warsaw, Poland
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(18), 8297; https://doi.org/10.3390/app14188297
Submission received: 14 August 2024 / Revised: 9 September 2024 / Accepted: 12 September 2024 / Published: 14 September 2024
(This article belongs to the Section Robotics and Automation)

Abstract

Many fields in which human health and life are at risk increasingly utilize mobile robots and UGVs (Unmanned Ground Vehicles). They typically operate in teleoperation mode (control based on a transmitted image, outside the operator’s direct field of view), as autonomy is not yet sufficiently developed and key decisions should be made by a human. Fast and effective decision making requires a high level of situational and action awareness. It relies primarily on visualizing the robot’s surroundings and end effectors using cameras and displays. This study compares the effectiveness of three imaging-system configurations, each simultaneously transmitting images from three cameras, while driving a UGV in complex terrain.

1. Introduction

In many fields, the use of robots or UGVs can increase safety and reduce the risk of losing health or life. These fields include mining, the operation of nuclear and chemical plants, various military operations, and disaster areas, particularly search and rescue operations [1,2,3,4,5,6,7,8,9,10,11,12,13,14]. These applications require the UGV to effectively perform on-road driving tasks (maneuvering in a limited space along a narrow strip of road) and search and detection missions (moving between randomly placed obstacles and finding a path). Such tasks most often require direct operator control over the platform; therefore, the platform operates in teleoperation mode [15,16,17,18,19]. Control in the teleoperation system significantly affects the efficiency of the tasks performed by the platform. Efficiency should be understood as the speed and precision of the tasks performed as well as the ability to detect and identify obstacles. In the teleoperation system, the key factor in this respect is the vision system, which directly affects the operator’s situational awareness [20,21,22,23]. Currently, additional systems are also used to support the operator in carrying out such road tasks and search and detection missions [24,25,26,27]. However, in situations that interfere with their proper operation, it is still necessary to ensure the situational awareness of the operator controlling the teleoperation system. For this reason, the authors limited the scope of the research presented in this article to the imaging system.
It is crucial to adapt the imaging system of the surroundings used for teleoperation to human perception and to the way the human eye observes. Taylor (1973) [28], Howard (1995) [29], Costella (1995) [30], and Strasburger et al. (2011) [31] have distinguished the following zones in human vision:
- The symbol recognition zone, with a width of 10–30°;
- The color recognition zone, with a width of 30–60°;
- The peripheral vision zone, with a horizontal width of 180–200°, in which only movement and brightness changes are noted.
The comfort zone of observation corresponds to the color recognition zone and is typically between 50 and 60° for an average person. For good spatial orientation, an angle of 180–200° is desirable, as it facilitates the easy creation of mental maps.
Glumm et al. (1992) [32] addressed the problem of selecting the field of view (FOV) of cameras. They studied the effectiveness of vehicle control using cameras with a horizontal field of view of 29° (vertical FOV 22°), 55° (vertical FOV 43°), and 94° (vertical FOV 75°). The researchers fixed cameras over the rear of the vehicle to observe the horizon, a reference point on the strip, and the vehicle hood. The highest speeds and the best handling accuracy were obtained for a lens with a horizontal field of view of 55°. Using a wider angle of view yielded slightly better results when avoiding obstacles. Significantly worse results were recorded in this case for the camera with the smallest field of view. In addition, the research has shown that it is advisable to ensure visibility of the vehicle’s front edge. A long focal length lens with a narrow field of view failed to provide this visibility, significantly worsening the vehicle’s surrounding awareness. Van Erp and Padmos (2003) [33] obtained similar results.
Glumm et al. (1997) [34] presented further studies on the influence of camera position on the efficiency of UGV control. The teleoperation system used a camera with a horizontal field of view of 55° in three positions. The shortest driving times were achieved with the camera inclined 15° forward, which gave the best visibility of the vehicle’s front and the surroundings of the front wheels.
Similar results were achieved by Scribner and Gombash (1998) [35] for driving in rough terrain. The study was conducted on a teleoperated UGV equipped with three panoramic cameras, each with a horizontal FOV of 55°. The image was displayed on three monitors with a total viewing angle of about 165°. The wide panoramic field of view was not very useful in path-following driving tasks, but in open-ended navigational tasks using waypoints and terrain recognition, it proved to be a significant factor in reducing the error rate.
Shinoda et al. (1999) [36] also dealt with the problems of UGV teleoperation in the field. Their research showed that a system with a main front camera and additional cameras covering the vehicle sides and running gear is the recommended solution when overcoming obstacles, because it allows for more precise control and reduces driver stress as well as the required level of concentration.
Shiroma et al. (2004) [37] studied the impact of camera selection on the effectiveness of rescue robots’ work. The robot’s task was to move in a complex area without losing orientation and position awareness. They confirmed that to control the UGV efficiently, it is necessary to have a clear image of the surrounding area and of obstacles close to the vehicle. Furthermore, the studies demonstrated the limitations of using fish-eye vision. Although its wide-angle FOV is good for obtaining information about the robot’s surroundings, the perspective distortion and limited resolution of the image make it difficult to recognize objects at a greater distance. This was confirmed by Costella (1995) [30]. For this reason, it is not suitable for search and detection missions.
The works of Delong et al. (2004) [38] and Scholtz (2002) [39] confirm that a lack of situation awareness correlates with poor operator performance. The use of a single standard video camera with a relatively narrow FOV can lead to the phenomenon known as the “soda straw” or “keyhole” effect. This effect can cause issues with landmark recognition (Thomas and Wickens (2000) [40]), distance and depth judgments (Witmer and Sadowski (1998) [41]), and an impaired ability to detect targets or obstacles (Van Erp and Padmos (2003) [33]). According to Scribner and Gombash (1998) [35], wider FOVs are useful for maintaining spatial orientation with respect to terrain features, ease of operation in off-road driving tasks, and performance in unfamiliar terrain. Moreover, an increase in the number of cameras helped operators achieve higher navigation efficiency, according to Voshell et al. (2005) [42].
A similar problem was discussed by Keys et al. (2006) [43]. They studied the effectiveness of teleoperations in urban search and rescue missions.
They found that the overhead camera produced a mean completion time about 10% faster than the forward-looking camera, and operators used the overhead camera more than the forward camera. The use of two cameras (front and rear) did not significantly reduce the number of collisions because, although both windows are displayed, the user remains focused on the primary video window and misses things in the rear camera view. Simultaneous tracking of images from two cameras showing different perspectives was not effective; the operator focused on the image from one camera. The test participants considered the use of PTZ cameras to be a significant advantage, but the disadvantage of this solution was the inability to observe the driving wheels and their immediate surroundings.
Another method of improving situational awareness while reducing video transmission was studied by Fiala (2005) [44]. To improve the efficiency of teleoperation, he used a head-mounted display (HMD) to obtain immersive viewing, which enabled panoramic viewing by turning the operator’s head. A camera with a catadioptric lens provided the panoramic image (360°), which, after transformation into a six-sided cube format, was sent to the HMD based on the operator’s head position. The system provided low latency in the image acquisition system, but the low effective image resolution of 320 × 240 pixels limited the ability to recognize details, and the small FOV of the glasses (HMD) worsened situational awareness, so the solution turned out to be ineffective.
Brayda et al. (2009) [45] conducted a study on the ability to follow difficult paths using PT camera settings at a low altitude with a downward tilt of 1.5°, at a medium altitude with a downward tilt of 29°, and at a high altitude with a downward tilt of 45°. The research found that a higher camera position with greater downward inclination deteriorates the perception of perspective and extends the task completion time by about 20%. The ability to control the camera also had a negative effect on the time needed to complete the task.
Yamauchi and Massey (2008) [46] studied a system of active PTZ camera control with head movements and an HMD display. During the search and detection tasks, the operators felt very comfortable, and, depending on the difficulty of the task, the system improved mission performance by between 200% and 400%. However, quick slalom tests showed that for experienced operators using fixed cameras, driving times were reduced by approximately 5%; the system enhanced the performance of only inexperienced operators. Moreover, the research found that, to improve situational awareness, it is advisable to use additional cameras providing a panoramic view in a window at the top of the screen.
Similar effects of limited usability of PT cameras (Pan-Tilt) have been observed by Doisy et al. (2017) [47]. The study revealed a decrease in the use of pan-tilt mechanisms as operators gained experience.
Johnson et al. (2015) [48] studied the impact of increasing the viewing angle on the effectiveness of teleoperation with limited video transmission. The robot was equipped with two cameras. The first camera, pointing forward and mounted vertically, provided a horizontal field of view (FOV) of 45°. The second camera, equipped with a catadioptric lens and a curved mirror, captured a full 360° view around the robot, employing techniques to unfold the video from the lens and augment the main camera’s displayed view. The research revealed a 30% reduction in task completion time when using a wide-angle (180°) and a panoramic (360°) view. Furthermore, with the wide-angle view, the number of errors decreased almost four times, whereas with the panoramic (360°) view, it decreased only 2.5 times. The researchers found that a wider FOV enhances situational awareness by supporting the formation of mental maps. Moreover, participants found the panoramic interface (360°) significantly more difficult to use than the wide-angle interface (180°).
Adamides et al. (2017) [49] investigated the effectiveness of the HRI (Human Robot Interaction) system of a teleoperated robot used for grape cultivation. It was found that using a PC screen instead of a head-mounted display (HMD) increases the comfort of use and reduces the operator’s effort by about 10%, despite the task’s relatively short duration. Furthermore, an interface with multiple views improves the robot’s efficiency—it reduces the number of collisions and increases the efficiency of detecting and spraying clusters. With a single camera in the HRI system, situational awareness is significantly limited; it does not allow for precise determination of the location of obstacles or for the observation and recognition of the surrounding environment. Moreover, it is advisable to use a separate camera to search for objects and carry out technological tasks. This is supported by the research results presented in Hughes et al. (2003) [50], where simulation studies demonstrated that it is justified to use not only a fixed driving camera but also an additional PTZ camera for search and detect tasks. This is in line with the work of Shiroma et al. (2004) [37] and Voshell et al. (2005) [42], which indicated that augmenting video by increasing camera FOV, using panoramic views, and providing zoom/pan capabilities improves operator efficiency and awareness.
The presented studies have shown that for uncomplicated tasks, such as driving a vehicle on a road, a relatively narrow field of view of 50–60° is sufficient. For the effective execution of more complex tasks, such as driving off-road or in a complex spatial environment, it is necessary to use a wide field of view extending to 180°. This allows for the formation of mental maps and better situational awareness. The central part of the FOV should have high resolution, while the peripheral view may have a lower resolution; however, such a solution is insufficient for search and detection missions.
Providing high resolution throughout the entire wide FOV does not increase the operator’s load or extend the task execution time. An interface with a panoramic 360° FOV does not improve situational awareness and is not very intuitive for the user.
HMD is uncomfortable for long-term missions because it has a limited FOV and can cause motion sickness. In this regard, using LCD screens is more advantageous.
PTZ cameras are useful in search and detection missions, as well as in controlling manipulators and end effectors—in this case, they provide action awareness. However, their use to visualize the surroundings while driving and building situation awareness is problematic because the lack of correlation between the optical axis of the camera and the longitudinal axis of the vehicle can lead to serious collisions. For UGV driving, fixed cameras with a horizontal view are a much better solution.
In summary, the presented literature provides guidance on how to construct a vision system for a UGV teleoperation system so as to improve the operator’s situational awareness. Based on the literature, it can also be concluded that the execution of on-road driving tasks and search and detection missions by UGVs depends significantly on the configuration of the vision system. However, it is not possible to determine which camera configuration is best in terms of the efficiency of the UGV teleoperation system. Therefore, this work involved field tests assessing the effectiveness of the teleoperation system, taking into account the configuration of the UGV vision system during vehicle driving and search and detection missions.
The remainder of this paper is organized as follows: Section 2 presents the research methodology, experimental setting and procedure, and indicators of evaluation of conducted tests. In Section 3, the experimental results are analyzed to determine the effectiveness of the UGV teleoperation system for three imaging system configurations. Finally, conclusions and suggestions for future work are discussed in Section 4.

2. Materials and Methods

2.1. Experimental Design

The purpose of the test was to compare the effectiveness of UGV performance on two different complex trials with three alternative camera placement configurations (Figure 1). In each configuration, a horizontal FOV of approximately 58° was used for all three cameras to ensure the maximum viewing angle that, according to Taylor (1973) [28], still allows for the recognition of objects and details. The number of cameras was limited to three due to the limited bandwidth of the video link.
The first camera configuration corresponds to the typical, standard camera arrangement used in light robots. The main camera is positioned at the front of the UGV, while the additional side cameras enable control over selected zones in the robot’s immediate surroundings, particularly those near the running gear. The horizontal positioning of the main camera provides the opportunity to observe not only the road (ground) but also buildings and other tall objects, which helps to improve situational awareness. The side cameras were designed to improve the ability to avoid obstacles. They are directed to the front, which naturally widens the field of view of the main camera, and they are tilted down by 20° to better align their field of view (FOV) with the observation area, especially when avoiding obstacles.
The second configuration corresponds to the operator’s field of view from within the vehicle. The cameras are arranged in a horizontal configuration, resulting in a panoramic image that has a field of view (FOV) of 174° horizontally and 45° vertically. This design enables lateral viewing and facilitates the construction of cognitive maps. The setup should allow for a comparison of the teleoperation efficiency of the regular method with a panoramic wide field of view (FOV) solution that incorporates a car hood, thereby greatly improving situational awareness.
The third configuration has a main camera placed centrally in the driver’s seat and two side cameras, similar to the first configuration, improving obstacle avoidance. This allows for a direct comparison of the effectiveness of UGV teleoperation with the camera in front of and centrally over the vehicle.
The system with the camera in front of the UGV provides visibility up to 2 meters ahead of the vehicle, while the centrally placed camera provides visibility up to 3 meters ahead of the vehicle.
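The visibility distances quoted above depend on the camera’s mounting height, downward tilt, and vertical FOV. The short Python sketch below illustrates, under purely hypothetical mounting heights (the paper does not report them), how the visible ground band of a tilted camera can be estimated; it is an illustration of the geometry, not a reconstruction of the authors’ values.

```python
import math

def ground_visibility(height_m: float, tilt_down_deg: float, vfov_deg: float):
    """Return (near, far) ground distances visible to a downward-tilted camera.

    The optical axis is tilted `tilt_down_deg` below the horizon; the lower and
    upper edges of the vertical FOV intersect the ground at the near and far
    distances. If the upper edge points at or above the horizon, the far
    distance is unbounded (math.inf).
    """
    lower = math.radians(tilt_down_deg + vfov_deg / 2)  # lower FOV edge below horizon
    upper = math.radians(tilt_down_deg - vfov_deg / 2)  # upper FOV edge below horizon
    near = height_m / math.tan(lower)
    far = math.inf if upper <= 0 else height_m / math.tan(upper)
    return near, far

# Hypothetical mounting heights (not reported in the paper): 0.8 m for a
# bumper-mounted camera, 1.8 m for a camera at the driver's position.
for label, h, tilt in [("front-mounted, level", 0.8, 0.0),
                       ("side camera, 20 deg down", 0.8, 20.0),
                       ("cockpit-mounted, level", 1.8, 0.0)]:
    near, far = ground_visibility(h, tilt, vfov_deg=45.0)
    print(f"{label}: ground visible from {near:.1f} m to "
          f"{'horizon' if math.isinf(far) else f'{far:.1f} m'}")
```

Such a calculation makes it easy to see how the mounting position changes the blind zone immediately in front of the vehicle, which is the practical difference between the front-mounted and centrally mounted main cameras.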

2.2. Experimental Setting and User Task

The participants’ task was to drive through two test tracks in the shortest possible time. The tracks reflected conditions representative of driving in search and detect missions as well as search and rescue missions, requiring high accuracy of UGV control and a high level of situational awareness from the operator.
Trial 1 (Figure 2a) was designed to check the effectiveness of the alternative imaging systems when driving the UGV along a relatively straight and narrow track with a width of 125% of the vehicle’s width, on which two tight corners with a radius of 4 m and a turning angle of 90° were placed. The turning radius was comparable to the vehicle’s minimum turning radius. The track’s total length was 50 m. The track was marked by relatively low and poorly visible markers: hexagonal concrete paving blocks with a height of 15 cm and a width of 35 cm, set every 3 m on unpaved terrain partially overgrown with grass. Based on trial 1, the platform’s ability to move on the road was assessed, and three tasks were distinguished:
  • Task 1 involves assessing the distance and speed that the platform can travel before changing its direction of movement to reach an obstacle or turn.
  • Task 2 involves assessing the orientation of obstacles on the road, which in turn influences the implementation of platform turning maneuvers for overcoming obstacles (turns).
  • Task 3 involves evaluating the platform’s ability to drive freely without the need for maneuvers (departing to the open space).
The aim of trial 2 (Figure 2b) was to check the effectiveness of the imaging systems in more complex situations, involving the need to detect and recognize obstacles on the road and avoid them on the way to the destination. The test track, with a length of 95 m (Figure 2b), consisted of straight sections with a width of 125% of the vehicle’s width, turns with radii of 4 m and 8 m, and two areas with dimensions of 6 × 15 m and 10 × 20 m, where obstacles were set up. The obstacle arrangement was changed randomly, so each run was different, but always in such a way that the vehicle was able to avoid the obstacles. There were three obstacles in the first area and two obstacles in the second area. In accordance with the test assumptions, the following inconspicuous objects were used as obstacles: a metal rod 1.6 m long and 20 mm in diameter, two wooden boxes of 17 × 35 × 45 cm (with and without inscriptions), and two cardboard boxes with dimensions of 20 × 70 × 130 cm and 50 × 60 × 60 cm. As in the previous case, the track was marked using hexagonal concrete paving blocks.
An example view from the conducted research is presented in Figure 3.

2.3. Participants

The study involved 20 participants aged 24–34 years. All participants held a driving license and had experience in driving. They were selected from among university employees and PhD students.

2.4. Used UGV and Interfaces

The imaging systems were tested on a remote-controlled Kawasaki Mule 4010 vehicle (Kobe, Japan) with a maximum off-road speed of 20 km/h and an Ackerman steering system (Figure 4a). The UGV was equipped with cameras fitted in such a way that the three configurations shown in Figure 1 could be obtained. The side cameras were tilted downward by 20°. A special video link enabled the simultaneous transmission of three PAL-standard images with a resolution of 768 × 576 pixels, delays exceeding 120 ms, and a frequency of 30 fps. The image was displayed on three color 17″ LCD monitors (Samsung, Suwon, Republic of Korea) set up in a panoramic arrangement on a specially designed control station (Figure 4b). For intuitive and easy control, the station was equipped with a steering wheel, an accelerator, and brake pedals.
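Since Section 2.1 notes that the number of cameras was limited to three by the bandwidth of the video link, a back-of-the-envelope estimate of the raw video data rate helps to show why. The sketch below assumes uncompressed YUV 4:2:2 sampling (2 bytes per pixel); the actual link and any compression used are not described in the paper, so the figures are illustrative only.

```python
# Rough, hedged estimate of the raw video data rate for the three-camera setup.
# Assumption (not from the paper): uncompressed YUV 4:2:2, i.e. 2 bytes per pixel.
WIDTH, HEIGHT = 768, 576      # PAL-resolution frame reported in the paper
FPS = 30                      # frame rate reported in the paper
BYTES_PER_PIXEL = 2           # assumed YUV 4:2:2 sampling
CAMERAS = 3

bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL
mbit_per_s_per_camera = bytes_per_frame * FPS * 8 / 1e6
total_mbit_per_s = mbit_per_s_per_camera * CAMERAS

print(f"per camera (uncompressed): {mbit_per_s_per_camera:.0f} Mbit/s")   # ~212 Mbit/s
print(f"three cameras (uncompressed): {total_mbit_per_s:.0f} Mbit/s")     # ~637 Mbit/s
```

Even with heavy compression, several hundred megabits per second of raw source data per direction makes it plausible that a field radio link would constrain the setup to three simultaneous streams.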

2.5. Experimental Procedure

Before starting the target tests, all participants familiarized themselves with the specifics of the remote control, the latency occurring in the control system, and the test tracks. To achieve this, all participants took 10 rides on the first and second tracks. During these rides, they controlled the vehicle from a portable stand, walking next to it and observing its surroundings directly.
The goal of the first phase of tests was to determine the reference drive time without using an imaging system. The operators covered both tracks 10 times, walking alongside the UGV each time. The obstacle setting on track 2 was different for each ride but the same for all participants.
In the second phase, the effectiveness of the imaging system was tested. For this purpose, the control station was placed outside the line of sight. The operators covered track 1 ten times in succession, and after all participants had completed their rides, the camera configuration was changed. Following the completion of track 1, the operators covered track 2 in the same manner. After each completed ride on track 2, the operators filled in a questionnaire concerning the detected and recognized obstacles and their locations.

2.6. Indicators of Evaluation of Conducted Tests

Driving time was the basic parameter measured during the tests and used to evaluate the effectiveness of teleoperation. In both the first and second tests, the total time to complete the track was measured; additionally, for track 1, the times to complete tasks 1, 2, and 3 were measured. In test 1, in addition to the times, the number of lane violations was counted. In test 2, in addition to counting lane violations, the number of obstacles that were hit and the number of obstacles that went undetected or unrecognized were also counted.
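The indicators listed above map naturally onto a simple per-ride record that can later feed the statistical analysis in Section 3. The sketch below shows one possible way to organize such data; the class and field names are hypothetical and the example values are illustrative, not measurements from the study.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RideRecord:
    """One ride by one operator; field names are illustrative, not the authors' schema."""
    participant_id: int
    configuration: int                      # 1, 2, 3, or 0 for direct observation
    trial: int                              # 1 or 2
    total_time_s: float                     # t1 or t2
    task1_time_s: Optional[float] = None    # t1_1 (trial 1 only)
    task2_time_s: Optional[float] = None    # t1_2 (trial 1 only)
    task3_time_s: Optional[float] = None    # t1_3 (trial 1 only)
    lane_violations: int = 0                # n1 or n2
    obstacles_hit: int = 0                  # n2_1 (trial 2 only)
    obstacles_undetected: int = 0           # n2_2 (trial 2 only)
    obstacles_unrecognized: int = 0         # n2_3 (trial 2 only)

# Illustrative example of a single trial-1 ride record (values are made up).
ride = RideRecord(participant_id=7, configuration=3, trial=1,
                  total_time_s=92.0, task1_time_s=37.0,
                  task2_time_s=26.0, task3_time_s=29.0, lane_violations=0)
```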

3. Results and Discussion

The Shapiro–Wilk test was used to check the normality of the distributions. The distributions of the following parameters were analyzed:
- The total time to complete trial 1 (t1);
- The time to complete task 1 (t1_1);
- The time to complete task 2 (t1_2);
- The time to complete task 3 (t1_3);
- The total time to complete trial 2 (t2);
- The number of lane violations in trial 1 (n1);
- The number of lane violations in trial 2 (n2);
- The number of obstacles hit in trial 2 (n2_1);
- The number of undetected obstacles in trial 2 (n2_2);
- The number of unrecognized obstacles in trial 2 (n2_3).
In the first test (trial 1), the normality analysis was performed for the three camera configurations and for direct observation. For the second test (trial 2), the normality analysis was performed only for the three camera configurations. A significance level of α = 0.05 was adopted. For the 20 people tested, according to the tables of critical values for the Shapiro–Wilk test, the critical value is W(0.05, 20) = 0.905. The results are presented in Table 1, where W is the Shapiro–Wilk test statistic and p is the probability of the assumed type I error [51].
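For readers who wish to reproduce this kind of normality check, the following sketch shows how the W statistic and p-value can be computed with scipy.stats.shapiro and compared against the critical value quoted above. The sample data are synthetic placeholders, not the study measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in for 20 completion-time measurements (seconds); not the study data.
times = rng.normal(loc=92.0, scale=8.0, size=20)

w_stat, p_value = stats.shapiro(times)
W_CRITICAL = 0.905   # tabulated critical value W(0.05, 20) quoted in the paper
ALPHA = 0.05

normal = (w_stat > W_CRITICAL) and (p_value > ALPHA)
print(f"W = {w_stat:.5f}, p = {p_value:.5f}, "
      f"normality {'not rejected' if normal else 'rejected'}")
```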
The results of the normality analysis (Table 1) indicate that not all parameters are characterized by a normal distribution (W < 0.905). Therefore, two separate statistical analyses were performed (a sketch of this test selection follows the list below):
- ANOVA [52]—for parameters with a normal distribution (t1, t1_1, t1_2, t1_3, t2), to demonstrate sufficient test power for the relationship between camera configuration and trial completion time;
- Kruskal–Wallis [53]—for parameters without a normal distribution (n1, n2, n2_1, n2_2, n2_3), to check the significance of the relationship between the number of errors made and the camera configuration. The critical value of the Kruskal–Wallis test is H = 5.9914 for α = 0.05 and df = 2 (the number of degrees of freedom is the number of groups minus one).
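As referenced in the list above, the test-selection logic can be sketched as follows: scipy.stats.f_oneway for the normally distributed time parameters and scipy.stats.kruskal for the error counts, grouped by camera configuration. The arrays below are synthetic placeholders loosely shaped like the reported averages, not the measured data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic per-ride data for the three camera configurations (200 rides each);
# these arrays are placeholders, not the measured values.
times = [rng.normal(loc=m, scale=10.0, size=200) for m in (99, 96, 92)]   # e.g. t1
violations = [rng.poisson(lam=l, size=200) for l in (0.45, 0.65, 0.26)]   # e.g. n1

# Normally distributed parameter -> one-way ANOVA.
f_stat, p_anova = stats.f_oneway(*times)
print(f"ANOVA: F = {f_stat:.3f}, p = {p_anova:.4f}")

# Count data without a normal distribution -> Kruskal-Wallis.
h_stat, p_kw = stats.kruskal(*violations)
H_CRITICAL = 5.9914   # critical value for alpha = 0.05 and df = 2, as quoted above
print(f"Kruskal-Wallis: H = {h_stat:.3f}, p = {p_kw:.4f}, "
      f"significant = {h_stat > H_CRITICAL}")
```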
Based on the conducted ANOVA, it was found that the number of trials is sufficient to demonstrate, with a test power of over 80%, that the time to complete a trial depends on the camera configuration (a hedged power-calculation sketch follows this list):
- For trial 1—t1 (α = 0.05 and RMSSE (Root Mean Square Standardized Effect) = 0.161698);
- For task 1—t1_1 (α = 0.05 and RMSSE = 0.159098);
- For task 2—t1_2 (α = 0.05 and RMSSE = 0.293171);
- For task 3—t1_3 (α = 0.05 and RMSSE = 0.465155);
- For trial 2—t2 (α = 0.05 and RMSSE = 0.699696).
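The power-calculation sketch referenced above can be reproduced, for example, with statsmodels, under two assumptions of mine rather than details stated by the authors: that the reported RMSSE values can be used directly as the standardized effect size f expected by FTestAnovaPower, and that the total number of observations equals the N = 600 reported for the Kruskal–Wallis analysis.

```python
from statsmodels.stats.power import FTestAnovaPower

# RMSSE values reported for trial 1, tasks 1-3, and trial 2.
effect_sizes = {
    "t1":   0.161698,
    "t1_1": 0.159098,
    "t1_2": 0.293171,
    "t1_3": 0.465155,
    "t2":   0.699696,
}

analysis = FTestAnovaPower()
for name, es in effect_sizes.items():
    # Assumption: RMSSE is used directly as the standardized effect size f,
    # with 3 camera configurations and 600 observations in total.
    power = analysis.solve_power(effect_size=es, nobs=600, alpha=0.05, k_groups=3)
    print(f"{name}: achieved power = {power:.3f}")
```

With these assumed inputs, even the smallest reported effect size yields a power well above the 80% threshold mentioned in the text.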
The Kruskal–Wallis analysis showed a statistically significant relationship between the camera configuration and the following:
- The number of lane violations in trial 1—n1 (α = 0.05; df = 2; H = 12.33742; p = 0.0021);
- The number of lane violations in trial 2—n2 (α = 0.05; df = 2; H = 6.261478; p = 0.0058);
- The number of obstacles hit in trial 2—n2_1 (α = 0.05; df = 2; H = 47.59081; p = 0.00001);
- The number of undetected obstacles in trial 2—n2_2 (α = 0.05; df = 2; H = 96.78267; p = 0.00001);
- The number of unrecognized obstacles in trial 2—n2_3 (α = 0.05; df = 2; H = 95.40297; p = 0.00001).
The number of data points for the Kruskal–Wallis analysis was N = 600.
The test results for the times needed to complete trial 1 and the tasks performed within trial 1 (tasks 1–3), depending on the camera configuration and for direct observation of the surroundings, are presented in Figure 5, Figure 6, Figure 7 and Figure 8. A box plot is used to show the distribution of each data set. In a box plot, the numerical data are divided into quartiles, and a box is drawn between the first and third quartiles, with an additional line drawn along the second quartile to mark the median. The minimums and maximums outside the first and third quartiles are depicted with lines, often called whiskers. The mean value is marked with an “x” symbol.
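A minimal matplotlib sketch of the box-plot style described above (quartile box, median line, whiskers, and the mean marked with an “x”) is given below; the completion-time arrays are synthetic placeholders, not the study data.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
# Synthetic trial-1 completion times (s); placeholders, not the measured data.
data = [rng.normal(99, 10, 200),   # configuration 1
        rng.normal(96, 10, 200),   # configuration 2
        rng.normal(92, 10, 200),   # configuration 3
        rng.normal(52, 5, 200)]    # direct observation

fig, ax = plt.subplots()
# showmeans=True adds a mean marker; meanprops forces the 'x' style used in the figures.
ax.boxplot(data,
           labels=["config 1", "config 2", "config 3", "direct"],
           showmeans=True,
           meanprops={"marker": "x", "markeredgecolor": "black"})
ax.set_ylabel("Trial 1 completion time [s]")
ax.set_title("Distribution of completion times per camera configuration")
plt.show()
```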
Analyzing the results obtained during the tests on the first track (trial 1), it can be stated that during the ride on a narrow track there were no significant differences (under 10%) between the camera configurations.
The shortest mean completion time, 92 s, was achieved in configuration 3, but it was only 4% shorter than that of configuration 2 and 7% shorter than that of configuration 1 (Figure 5).
It should be noted that the average time needed to complete test 1 for tested configurations in comparison to the time of the reference test with direct observation of the surrounding area (52 s) increased by 90% (configuration 1), 84% (configuration 2), and 77% (configuration 3).
The average time of task 1 execution (Figure 6) was 38 s (configuration 1), 39 s (configuration 2), and 37 s (configuration 3). The differences in the values did not exceed 5%. The time of task 1 execution for direct observation of the surrounding area was 46–51% shorter in comparison to the tested camera configurations.
When cornering (task 2), again the shortest average travel time (Figure 7) was achieved for configuration 3 (26 s). In task 2, the differences in drive times between configurations are significant and exceed 20%. Task 2 execution time for direct observation of the surrounding area was 46–56% shorter than the tested camera configurations.
When departing to an open space (task 3), the shortest average travel time (Figure 8) was obtained for configuration 2 (25 s), which in the previous tasks had turned out to be the worst. In task 3, the differences in average drive times between configurations are significant and reach almost 30%. Task 3 execution time for direct observation of the surrounding area was 24–41% shorter than for the tested camera configurations.
The number of lane violations per ride (control errors) shows much larger differences in teleoperation effectiveness between the camera configurations. It clearly indicates that the best solution is configuration 3 (Figure 9). In this case, the average number of errors per ride (0.26 errors per ride) is 60% lower than in the worst solution, configuration 2. Compared to configuration 2, configuration 1 reduces the number of lane violations by only 32%. It should be noted that the vast majority of lane violations in all configurations occurred at the tight corners.
Analyzing travel times, it can be concluded that configuration 2 enabled the fastest travel while driving without the need to precisely avoid obstacles (task 3). Task 3’s execution time was only 32% longer than during direct observation of the surrounding area. However, during precise navigation between obstacles (task 2), configuration 2 turned out to be less effective.
A comparison between the average execution times of task 1 and task 3 (20 m long sections) reveals a significant increase in task 1’s duration compared to task 3 (19% for configuration 1, 56% for configuration 2, and 24% for configuration 3). This means that teleoperation efficiency is relatively low during tasks that require precise assessment of the UGV’s position and of obstacles. Configuration 3 significantly outperforms the other solutions in terms of the number of lane violations and the ability to control the UGV in a limited space.
Trial 2 also consisted of maneuvering in limited space. However, it had a differentiated cornering radius and relatively open spaces, providing greater maneuverability when avoiding obstacles. It required the operator to possess not only good spatial orientation but also the ability to detect, recognize, and avoid obstacles.
Trial 2’s results, compared to trial 1, show significantly greater differences in travel times depending on the camera configuration. The shortest average travel time was obtained for configuration 3 (422 s); it was 16% shorter than the time obtained for configuration 2 and 23% shorter than that obtained for configuration 1 (Figure 10). It should be noted that all operators obtained their best results in configuration 3 and their worst results in configuration 1. However, the complexity of the task caused a significant increase in the average travel time in comparison to the case of direct observation; for configuration 3, the time extension was 150%, for configuration 2, it was 197%, and for configuration 1, it was 226%.
An analysis of the ability to detect and recognize obstacles indicates a significant variation in results depending on the camera configuration of the teleoperated system (Figure 11).
The average number of undetected obstacles per pass was 0.014 for configuration 3, 0.14 for configuration 2, and 1.2 for configuration 1. Configuration 3 also provided the best results in terms of obstacle recognition, and configuration 1 the worst: its average number of unrecognized obstacles per pass was 22 times higher than that of the best configuration (configuration 3).
The number of obstacles hit can be used as an indicator of the ability to construct a mental map. From this point of view, configuration 3 proved to be the most effective (Figure 12); only 0.2 contacts with obstacles per ride were recorded. In configuration 1, there were 80% more, and in configuration 2, as many as 420% more.
A comparison of the number of lane violations (Figure 12) shows that configuration 1 turned out to be the most effective—on average 0.15 violations during one ride. The increased number of lane violations in configuration 3 (0.32 violations per ride) probably resulted from greater operator confidence and execution of the task at much higher speeds.

4. Conclusions

The tests conducted in trial 1 did not show significant differences between the tested solutions in the time to reach the obstacle or the cornering time, but they confirmed that for driving without intensive maneuvering and precise obstacle avoidance, the best solution is a panoramic 180° view (configuration 2). In this case, the cameras should be placed in such a way that they emulate the view from the driver’s cockpit, capturing a fragment of the vehicle hood.
When driving in a very limited space, the best solution is a main camera located in the driver’s cockpit, showing the front view with the vehicle hood and working in conjunction with additional side cameras (configuration 3). This is needed to precisely assess the position of obstacles relative to the UGV chassis and to analyze the UGV’s surroundings. Such a solution provides the shortest operation time and the lowest number of errors. Placing the main camera in front of the UGV (configuration 1) produces dramatically worse results.
The solution, which combines a main camera in the driver’s cockpit with additional side cameras, offers the best balance between task completion time and the number of errors, as well as the detection and recognition of obstacles. This solution facilitated the nearly flawless execution of planned tasks, allowing a relatively large UGV equipped with Ackerman’s steering system to maneuver in tight spaces. This creates the possibility of a relatively simple adaptation to teleoperations of many specialist vehicles used in rescue or military operations.
The applied approach does not exclude the possibility of additional use of environment recognition systems based on camera images, which will be the subject of further research.

Author Contributions

Conceptualization, K.C., P.K. and R.K.T.; methodology, T.M. and R.K.T.; software, K.C. and P.K.; validation, K.C. and M.P.; formal analysis, P.K. and R.K.T.; investigation, K.C. and R.K.T.; resources, T.M. and M.P.; data curation, R.K.T. and A.R.; writing—original draft preparation, P.K. and M.P.; writing—review and editing, P.K. and T.M.; visualization, K.C. and M.P.; supervision, M.P. and R.K.T.; project administration, T.M. and R.K.T.; funding acquisition, T.M. and A.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was financed/co-financed by the Military University of Technology under research project UGB 708/2024.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Ethics Committee operating at Warsaw University of Life Science, resolution 15 February 2022, number 7/2022.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The original contributions presented in the study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Kułakowski, K.; Kułakowski, P.; Klich, A.; Porzuczek, J. Application of the Infrared Thermography and Unmanned Ground Vehicle for Rescue Action Support in Underground Mine—The AMICOS Project. Remote Sens. 2022, 14, 2134. [Google Scholar] [CrossRef]
  2. Wojtusiak, M.; Lenda, G.; Szczepański, M.; Mrówka, M. Configurations and Applications of Multi-Agent Hybrid Drone/Unmanned Ground Vehicle for Underground Environments: A Review. Drones 2023, 7, 136. [Google Scholar] [CrossRef]
  3. Wu, L.; Wang, C.; Zhang, X. Robotic Systems for Underground Mining: Safety and Efficiency. Int. J. Min. Sci. Technol. 2017, 27, 855–860. [Google Scholar] [CrossRef]
  4. Gao, S.; Shao, W.; Zhu, C.; Ke, J.; Cui, J.; Qu, D. Development of an Unmanned Surface Vehicle for the Emergency Response Mission of the ‘Sanchi’ Oil Tanker Collision and Explosion Accident. Appl. Sci. 2020, 10, 2704. [Google Scholar] [CrossRef]
  5. Li, Q.; Zhang, Y.; Wang, J.; Yu, Z.; Xie, Z.; Meng, X. Study on the Autonomous Recovery System of a Field-Exploration Unmanned Surface Vehicle. Sensors 2020, 20, 3207. [Google Scholar] [CrossRef]
  6. Smith, J.R.; Chen, H.; Johnson, L. Autonomous Navigation of Unmanned Ground Vehicles in Urban Military Operations. IEEE Trans. Rob. 2021, 37, 905–917. [Google Scholar]
  7. Williams, D.K.; Garcia, M.J.; Thompson, P.A. Development and Deployment of Unmanned Ground Vehicles for Defense Applications. J. Def. Technol. 2022, 18, 205–220. [Google Scholar]
  8. Patel, R.; Kumar, S.; Desai, T. Challenges and Opportunities in UGV Applications for Military Operations. J. Mil. Sci. 2023, 12, 45–59. [Google Scholar]
  9. Mahmud, M.S.; Dawood, A.; Goher, K.M.; Bai, X.; Morshed, N.M.; Bakar, A.A.; Boon, S.W. ResQbot 2.0: An Improved Design of a Mobile Rescue Robot with an Inflatable Neck Securing Device for Safe Casualty Extraction. Appl. Sci. 2024, 14, 4517. [Google Scholar]
  10. Smith, J.; Johnson, R.; Lee, T. Unmanned Ground Vehicles for Search and Rescue Operations: A Review. J. Rob. Autom. 2023, 15, 567–589. [Google Scholar]
  11. Brown, K.; Wang, L.; Patel, A. Design and Implementation of Autonomous Ground Robots for Emergency Response. Int. J. Rob. Res. 2022, 41, 210–225. [Google Scholar]
  12. Anderson, M.; Kumar, S.; Zhang, Y. Challenges and Opportunities in Using Unmanned Ground Vehicles for Disaster Response. IEEE Trans. Rob. 2021, 37, 1234–1245. [Google Scholar]
  13. Williams, P.; Martinez, R.; Tanaka, H. Autonomous Robotic Systems for Search and Rescue in Urban Environments. Adv. Intell. Syst. 2024, 6, 89–101. [Google Scholar]
  14. Krogul, P.; Cieślik, K.; Łopatka, M.J.; Przybysz, M.; Rubiec, A.; Muszyński, T.; Rykała, Ł.; Typiak, R. Experimental Research on the Influence of Size Ratio on the Effector Movement of the Manipulator with a Large Working Area. Appl. Sci. 2023, 13, 8908. [Google Scholar] [CrossRef]
  15. Łopatka, M.J.; Krogul, P.; Przybysz, M.; Rubiec, A. Preliminary Experimental Research on the Influence of Counterbalance Valves on the Operation of a Heavy Hydraulic Manipulator during Long-Range Straight-Line Movement. Energies 2022, 15, 5596. [Google Scholar] [CrossRef]
  16. Chen, X.; Liu, Y.; Wang, Z. A Review on Teleoperation Control Systems for Unmanned Vehicles. Sensors 2022, 22, 1398. [Google Scholar] [CrossRef]
  17. Zhang, T.; Li, J.; Zhao, Q. Teleoperation Control Strategies for Mobile Robots: A Survey. Robotics 2021, 10, 275. [Google Scholar]
  18. Yang, H.; Liu, F.; Xu, J. Design and Implementation of a Teleoperation System for Ground Robots. Appl. Sci. 2023, 13, 1204. [Google Scholar]
  19. Wang, M.; Sun, Y.; Liu, Z. Real-Time Teleoperation Control for Autonomous Vehicles Using Advanced Communication Techniques. Electronics 2023, 12, 2022. [Google Scholar]
  20. Kim, H.; Park, J.; Lee, S. The Impact of Visual Systems on Situational Awareness in Remote Operation Systems. Sensors 2022, 22, 4434. [Google Scholar] [CrossRef]
  21. Zhang, Y.; Liu, W.; Yang, X. Enhancing Operator Situational Awareness with Advanced Vision Systems in Autonomous Vehicles. Robotics 2021, 10, 180. [Google Scholar]
  22. Chen, L.; Wang, J.; Li, Q. Visual Perception and Situational Awareness in Remote Control Systems: A Comprehensive Review. Appl. Sci. 2023, 13, 789. [Google Scholar]
  23. Smith, R.; Brown, T.; Davis, M. The Role of Vision-Based Systems in Improving Situational Awareness for Drone Operators. Electronics 2022, 11, 1234. [Google Scholar] [CrossRef]
  24. Xiang, Y.; Li, D.; Su, T.; Zhou, Q.; Brach, C.; Mao, S.S.; Geimer, M. Where Am I? SLAM for Mobile Machines on a Smart Working Site. Vehicles 2022, 4, 529–552. [Google Scholar] [CrossRef]
  25. Khan, S.; Guivant, J. Design and Implementation of Proximal Planning and Control of an Unmanned Ground Vehicle to Operate in Dynamic Environments. IEEE Trans. Intell. Veh. 2023, 8, 1787–1799. [Google Scholar] [CrossRef]
  26. Zhou, Z.; Wang, Y.; Zhou, G.; Nam, K.; Ji, Z.; Yin, C. A Twisted Gaussian Risk Model Considering Target Vehicle Longitudinal-Lateral Motion States for Host Vehicle Trajectory Planning. IEEE Trans. Intell. Transp. Syst. 2023, 24, 13685–13697. [Google Scholar] [CrossRef]
  27. Ding, F.; Shan, H.; Han, X.; Jiang, C.; Peng, C.; Liu, J. Security-based resilient triggered output feedback lane keeping control for human–machine cooperative steering intelligent heavy truck under Denial-of-Service attacks. IEEE Trans. Fuzzy Syst. 2023, 31, 2264–2276. [Google Scholar] [CrossRef]
  28. Taylor, J.H. Vision in Bioastronautics Data Book; Scientific and Technical Information Office, NASA: Hampton, VI, USA, 1973; pp. 611–665. [Google Scholar]
  29. Howard, I.P. Binocular Vision and Stereopsis; Oxford University Press: Oxford, UK, 1995. [Google Scholar]
  30. Costella, J.P. A Beginner’s Guide to the Human Field of View; Technical Report; School of Physics, The University of Melbourne: Melbourne, Australia, 1995. [Google Scholar]
  31. Strasburger, H.; Rentschler, I.; Jüttner, M. Peripheral Vision and Pattern Recognition: A Review. J. Vis. 2011, 11, 13. [Google Scholar] [CrossRef]
  32. Glumm, M.M.; Kilduff, P.W.; Masley, A.S. A Study of the Effects of Lens Focal Length on Remote Driver Performance; ARL-TR-25; Army Research Laboratory: Adelphi, MD, USA, 1992. [Google Scholar]
  33. van Erp, J.B.F.; Padmos, P. Image Parameters for Driving with Indirect Viewing Systems. Ergonomics 2003, 46, 1471–1499. [Google Scholar] [CrossRef]
  34. Glumm, M.M.; Kilduff, P.W.; Masley, A.S.; Grynovicki, J.O. An Assessment of Camera Position Options and Their Effects on Remote Driver Performance; ARL-TR-1329; Army Research Laboratory: Adelphi, MD, USA, 1997. [Google Scholar]
  35. Scribner, D.R.; Gombash, J.W. The Effect of Stereoscopic and Wide Field of View Condition on Teleoperator Performance; Technical Report ARL-TR-1598; Army Research Laboratory: Adelphi, MD, USA, 1998. [Google Scholar]
  36. Shinoda, Y.; Niwa, Y.; Kaneko, M. Influence of Camera Position for a Remotely Driven Vehicle—Study of a Rough Terrain Mobile Unmanned Ground Vehicle. Adv. Rob. 1999, 13, 311–312. [Google Scholar] [CrossRef]
  37. Shiroma, N.; Sato, N.; Chiu, Y.; Matsuno, F. Study on Effective Camera Images for Mobile Robot Teleoperation. In Proceedings of the International Workshop on Robot and Human Interactive Communication, Kurashiki, Okayama, Japan, 20–22 September 2004. [Google Scholar]
  38. Delong, B.P.; Colgate, J.E.; Peshkin, M.A. Improving Teleoperation: Reducing Mental Rotations and Translations. In Proceedings of the IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, 26 April–1 May 2004. [Google Scholar]
  39. Scholtz, J.C. Human-Robot Interaction: Creating Synergistic Cyber Forces. In Multi-Robot Systems: From Swarms to Intelligent Automata; Springer: Boston, MA, USA, 2002; pp. 177–184. [Google Scholar]
  40. Thomas, L.C.; Wickens, C.D. Effects of Display Frames of Reference on Spatial Judgment and Change Detection; Technical Report ARL-00-14/FED-LAB-00-4; U.S. Army Research Laboratory: Adelphi, MD, USA, 2000. [Google Scholar]
  41. Witmer, B.G.; Sadowski, W.I. Nonvisually Guided Locomotion to a Previously Viewed Target in Real and Virtual Environments. Hum. Factors 1998, 40, 478–488. [Google Scholar] [CrossRef]
  42. Voshell, M.; Woods, D.D.; Philips, F. Overcoming the Keyhole in Human-Robot Coordination. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Canberra, Australia, 21–23 November 2005; Sage CA: Los Angeles, CA, USA, 2005; Volume 49, pp. 442–446. [Google Scholar]
  43. Keys, B.; Casey, R.; Yanco, H.A.; Maxwell, B.M. Camera Placement and Multi-Camera Fusion for Remote Robot Operation. In Proceedings of the IEEE International Workshop on Safety, Security and Rescue Robotics, Sendai, Japan, 21–24 October 2008; National Institute of Standards and Technology: Gaithersburg, MD, USA, 2006; pp. 22–24. [Google Scholar]
  44. Fiala, M. Pano-Presence for Teleoperation. In Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, Edmonton, AB, Canada, 2–6 August 2005. [Google Scholar]
  45. Brayda, L.; Ortiz, J.; Mollet, N.; Chellali, R.; Fontaine, J.G. Quantitative and Qualitative Evaluation of Vision-Based Teleoperation of a Mobile Robot. In Proceedings of the ICIRA 2009: Intelligent Robotics and Applications, Singapore, 16–18 December 2009; Springer: Berlin/Heidelberg, Germany, 2009; pp. 792–801. [Google Scholar] [CrossRef]
  46. Yamauchi, B.; Massey, K. Stingray: High-Speed Teleoperation of UGVS in Urban Terrain Using Driver-Assist Behaviors and Immersive Telepresence. In Proceedings of the 26th Army Science Conference, Orlando, FL, USA, 1–4 December 2008. [Google Scholar]
  47. Doisy, G.; Ronen, A.; Edan, Y. Comparison of Three Different Techniques for Camera and Motion Control of a Teleoperated Robot. Appl. Ergon. 2017, 58, 527–534. [Google Scholar] [CrossRef] [PubMed]
  48. Johnson, S.; Rae, I.; Multu, B.; Takayama, L. Can You See Me Now? How Field of View Affects Collaboration in Robotic Telepresence. In Proceedings of the CHI 2015, Seoul, Republic of Korea, 18–23 April 2015. [Google Scholar]
  49. Adamides, G.; Katsanos, C.; Parmet, Y.; Christou, G.; Xenos, M.; Hadzilacos, T.; Edan, Y. HRI Usability Evaluation of Interaction Modes for a Teleoperated Agricultural Robotic Sprayer. Appl. Ergon. 2017, 62, 237–246. [Google Scholar] [CrossRef] [PubMed]
  50. Hughes, S.; Manojlovich, J.; Lewis, M.; Gennari, J. Camera Control and Decoupled Motion for Teleoperation. In Proceedings of the 2003 IEEE International Conference on Systems, Man and Cybernetics (SMC’03), Washington, DC, USA, 5–8 October 2003; pp. 1820–1825. [Google Scholar]
  51. Tomšik, R. Power Comparisons of Shapiro-Wilk, Kolmogorov-Smirnov and Jarque-Bera Tests. Sch. J. Res. Math. Comput. Sci. 2019, 3, 238–243. [Google Scholar]
  52. Ogundipe, A.A.; Sha’ban, A.I. Statistics for Engineers: An Introduction to Hypothesis Testing and ANOVA; Springer: Berlin/Heidelberg, Germany, 2019. [Google Scholar]
  53. Jain, P.K. Applied Nonparametric Statistical Methods; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
Figure 1. Three tested configurations of camera settings on UGV and their FOV: (a) configuration 1; (b) configuration 2; (c) configuration 3.
Figure 2. Test tracks of (a) trial 1; (b) trial 2.
Figure 3. Example view from research: (a) trial 1; (b) trial 2.
Figure 4. Experimental stand: (a) controlled UGV; (b) operator station.
Figure 5. Driving time for 3 camera configurations and driving with direct observation of the surrounding—trial 1.
Figure 6. Reaching the obstacle time for 3 camera configurations and driving with direct observation of the surrounding—trial 1—task 1.
Figure 7. Cornering time for 3 camera configurations and driving with direct observation of the surrounding—trial 1—task 2.
Figure 8. Departure time to the open space for 3 camera configurations and driving with direct observation of the surrounding—trial 1—task 3.
Figure 9. Number of lane violations per ride for 3 camera configurations—trial 1.
Figure 10. Average travel time for 3 camera configurations and driving with direct observation of the surrounding—trial 2.
Figure 11. Average of number of undetected and unrecognized obstacles per ride for 3 camera configurations—trial 2.
Figure 12. Average of number of obstacles and lane violations per ride for 3 camera configurations—trial 2.
Table 1. Summary of the results of the analysis of normality of distribution of the Shapiro–Wilk test.

| Parameter |   | Configuration 1 | Configuration 2 | Configuration 3 | Direct Observation |
| t1        | W | 0.99283 | 0.98234 | 0.98997 | 0.96140 |
|           | p | 0.43841 | 0.01289 | 0.17701 | 0.02992 |
| t1_1      | W | 0.99094 | 0.99229 | 0.98821 | 0.96836 |
|           | p | 0.24430 | 0.37349 | 0.09694 | 0.07362 |
| t1_2      | W | 0.99470 | 0.99630 | 0.99495 | 0.95412 |
|           | p | 0.70557 | 0.91468 | 0.74287 | 0.01198 |
| t1_3      | W | 0.99542 | 0.99212 | 0.97161 | 0.93721 |
|           | p | 0.80934 | 0.35525 | 0.00045 | 0.00164 |
| t2        | W | 0.9841  | 0.98948 | 0.96881 | 0.99099 |
|           | p | 0.02360 | 0.14991 | 0.00020 | 0.24777 |
| n1        | W | 0.77294 | 0.89553 | 0.88745 | - |
|           | p | 0.02189 | 0.30478 | 0.26160 | - |
| n2        | W | 0.41608 | 0.53108 | 0.57489 | - |
|           | p | 0.00001 | 0.00001 | 0.00001 | - |
| n2_1      | W | 0.59449 | 0.85219 | 0.42342 | - |
|           | p | 0.00001 | 0.00001 | 0.00001 | - |
| n2_2      | W | 0.85158 | 0.41608 | 0.09816 | - |
|           | p | 0.00001 | 0.00001 | 0.00001 | - |
| n2_3      | W | 0.87559 | 0.59456 | 0.28170 | - |
|           | p | 0.00001 | 0.00001 | 0.00001 | - |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
