Article

Design of Multimodal Sensor Module for Outdoor Robot Surveillance System

Korea Institute of Robotics and Technology Convergence, Pohang 37666, Korea
* Author to whom correspondence should be addressed.
Electronics 2022, 11(14), 2214; https://doi.org/10.3390/electronics11142214
Submission received: 9 June 2022 / Revised: 13 July 2022 / Accepted: 13 July 2022 / Published: 15 July 2022
(This article belongs to the Section Computer Science & Engineering)

Abstract

Recent studies on surveillance systems have employed various sensors to recognize and understand outdoor environments. In complex outdoor environments, sensor data acquired under all weather conditions, both day and night, can be applied to robots operating in real environments. Autonomous surveillance systems require a sensor system that can acquire various types of sensor data and can be easily mounted on fixed and mobile agents. In this study, we propose a method for modularizing multiple vision and sound sensors into one system, extracting data synchronized with a 3D LiDAR sensor, and matching them to obtain data from various outdoor environments. The proposed multimodal sensor module can acquire six types of images: RGB, thermal, night vision, depth, fast RGB, and IR. Using the proposed module with a 3D LiDAR sensor, multimodal sensor data were obtained from fixed and mobile agents and tested for more than four years. To further prove its usefulness, this module was used as a monitoring system for six months to monitor anomalies occurring at a given site. In the future, we expect that the data obtained from multimodal sensor systems can be used for various applications in outdoor environments.

Graphical Abstract

1. Introduction

Studies using various sensors have mainly been conducted in confined environments for industrial monitoring and manipulation. Recently, such sensors have attracted attention as a solution for wide-area security, owing to advances in networking and improvements in sensor performance. The data obtained from various sensors are used to recognize the environment and to understand abnormal situations over a wide area. To build an autonomous surveillance system in an outdoor environment, a sensor system is required that is robust against weather conditions such as snow, rain, and wind and that can be used throughout the day and night. Most outdoor surveillance systems employ fixed vision sensors that acquire color and infrared images during the day and night [1]. Such vision-based unmanned surveillance systems can perform continuous monitoring, but their surveillance area is extremely limited. Surveillance systems that go beyond local monitoring should therefore provide autonomous monitoring using both fixed and mobile agents. This requires a mounting method that is robust against vibration and has excellent durability for both fixed and mobile agents, as well as a vision sensor system that can synchronize and collect sensor data so that agents can detect objects and drive autonomously in complex outdoor environments.
Research using a variety of sensors in the field of surveillance has mostly dealt with limited environments such as industrial monitoring or manipulation. Vision sensors have been used for decades in many fields, for applications such as object recognition, tracking, and situational awareness [2]. However, because most of these technologies are trained on sensor data collected in stable daytime environments, data from other environments are also required. Many studies have addressed data generation using various sensors. The most representative approach is to collect data by installing equipment on a vehicle that is driven around, as in a mobile mapping system [3,4]. Such data are used in various ways, for example to assess road conditions [5], to support autonomous driving [6], and to train recognition models [4,7]. However, it is difficult to apply the data collected in this way to the fixed and mobile agents of an autonomous monitoring system simultaneously. In other words, it should be possible to acquire and apply various types of data that can cope with changes due to weather and time in an outdoor environment.
Various approaches have been taken to collecting heterogeneous data for sensor fusion. Monitoring sensor modules for healthcare in residential environments have been proposed, but they include sensors suitable for indoor services only [8,9]. Robotic multimodal sensors are used for manipulation [10]; they focus on grasping objects and are table mounted. For agent motion, multimodal sensors for mobile robots use encoder, force, and distance sensors for detection rather than vision sensors for accurate pose estimation [11]. Multimodal sensors for mobile agents have also focused only on indoor localization [12]. Studies addressing outdoor challenges such as snow support autonomous driving by learning from color and infrared images, but this appears insufficient for recognizing abnormal situations in outdoor monitoring areas [13]. Multimodal sensors for object detection in marine environments use color-based learning and can only be used during the daytime [14]. Sensor fusion for autonomous vehicle navigation uses only the active depth information from RGB cameras and LiDAR [15,16]. A method using video and sound can only respond to intruders in a room [17]. Because an outdoor autonomous monitoring system must operate 24 h a day and under all weather conditions, the sensor system needs to be equipped with multiple vision sensors and sound sensors.
A method using a pan-tilt camera or a multi-modal sensor to secure a monitoring area has been proposed for fixed monitoring systems only [18,19]. A video-based security system using a mobile robot has also been proposed [20], but its surveillance area is limited; relying on only a fixed agent or only a mobile agent is of little use. Studies employing heterogeneous agents are not suitable for outdoor spaces covering a large area because they operate indoors [21,22,23]. Research on connecting mobile robots and surveillance cameras is limited to indoor environments, with specified scenarios for surveillance and response missions [24]. Although a method that can monitor a wide area based on wireless communication has been introduced [25], it refers to video streaming over a wide area. Therefore, both fixed and mobile agents should be employed to cover wide monitoring areas [26,27]. In conclusion, agents sharing the same sensor system can be designed for high-level monitoring systems because they can simultaneously monitor and respond to abnormal situations. In addition, images captured in hazy or foggy weather are seriously degraded by the scattering of atmospheric particles, which directly influences the performance of outdoor computer vision systems [28]. Monitoring systems can improve data acquisition through effective filtering and compression for real-world use [29,30,31].
In this study, we propose a multimodal sensor module for multiple fixed and mobile agents in outdoor environments. The proposed module provides six types of vision sensor data and four-channel sound data that are synchronized and integrated with 3D LiDAR data using a calibration method. Moreover, this module enables integrated data collection for 24 h monitoring of the surveillance area. For long-term outdoor operation, heat resistance and durability were considered, and a vibration-damping device and cover were included. We verified its usefulness by performing long-term operation and monitoring scenarios at a real site.
The remainder of this paper is organized as follows. Section 2 introduces the multimodal sensor module designed and manufactured for outdoor agents, and Section 3 describes how data are acquired by mounting the module on agents and presents the calibration method. Section 4 shows the application of the multimodal sensor module. Finally, Section 5 concludes the study.

2. Design of a Multi-Modal Sensor System

2.1. Multi-Modal Sensor System Configuration

To employ multiple sensors in one module, it is important to unify the interfaces for power and data. Figure 1 shows the configuration of the multimodal sensor system for mounting on an agent. The system includes multiple vision sensors and a 3D LiDAR and employs a single power supply (48 V DC) and three data interfaces (analog, USB 3.0, and GigE). It is enclosed in one case, which includes a seal, cooler, and sunshade, and is connected to a damper for low-frequency vibrations. The sensor data generated from this module are synchronized with the 3D LiDAR data and calibrated.

2.2. Overall Structure and Arrangement of Multi-Modal Sensor Module

The multimodal sensor module contains sensors and devices, including multiple cameras. Figure 2 shows the overall structure of the multimodal sensor module. First, an RGB image serves as the reference image covering the entire field of view. In addition, night vision images provide information even in dark environments. The depth image measures the distance to objects within a three-dimensional field of view. Furthermore, a thermal image sensor detects hot spots and the temperature range during an emergency, such as a fire. A microphone is installed on each of the four sides of the multi-modal sensor to detect abnormal sounds. Finally, considering heat and vibration, a sunshade and a damper were installed on the module.
When multiple cameras and sensors are combined in a single module, their arrangement is important for easier calibration of the acquired data. Therefore, as shown in Figure 3, the cameras and sensors were placed at regular intervals along the horizontal and vertical directions. The RGB and depth camera was installed horizontally, and the two cameras built into the module (the night vision camera and the RGB and depth camera) were arranged at an equal interval (a). With the RGB and depth camera as the reference, the night vision camera was directed in the opposite direction at the same distance. A global shutter camera was installed at a distance (b) vertically below the right reference camera, and the thermal camera was located at a distance (a) horizontally from the global shutter camera and (b) vertically below the left reference camera. Placing the cameras in this way reduces the factors that need to be considered when integrating the data and benefits the computational process, as illustrated by the sketch below. Table 1 lists the field of view of each sensor installed in the module.
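As a purely illustrative aid (not part of the original design data), the grid-like arrangement can be summarized as a lookup of horizontal and vertical offsets relative to the reference camera. The sensor names and the spacing values below are placeholder assumptions, since the actual values of (a) and (b) are not stated in the text.

```python
# Hypothetical summary of the regular sensor arrangement: each optical center
# is expressed as multiples of the horizontal spacing "a" and the vertical
# spacing "b" relative to the reference camera. Values are placeholders.
A_M = 0.05   # horizontal spacing "a" in metres (assumed value)
B_M = 0.04   # vertical spacing "b" in metres (assumed value)

SENSOR_OFFSETS = {
    "rgbd_reference": (0.0 * A_M, 0.0 * B_M),    # RGB and depth camera (reference)
    "night_vision":   (-1.0 * A_M, 0.0 * B_M),   # same spacing a, opposite side
    "global_shutter": (0.0 * A_M, -1.0 * B_M),   # distance b below the right reference camera
    "thermal":        (-1.0 * A_M, -1.0 * B_M),  # a from the global shutter camera, b below the left camera
}
```

Because every inter-sensor translation reduces to a small multiple of (a) and (b), data integration only has to handle a handful of fixed offsets.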

2.3. Waterproof and Cooling Design

Considering use in an outdoor environment, the module is designed to offer protection during rainy weather, as shown in Figure 4. A waterproof seal fits around the lens of each camera exposed to the outside environment, and the microphone sensors are also waterproofed. A cooling fan and a hole for cable entry are located at the bottom of the multi-modal sensor; therefore, the design includes a rain gutter along the bottom edge to prevent rainwater from flowing down and into the interior of the module.
Because the inside of the module is densely packed with cameras and sensors, which generate considerable heat, the design provides for heat dissipation, as shown in Figure 5. A cooling fan at the bottom of the multi-modal sensor draws in outside air, which is cooler than the inside of the module, and the hot air escapes through vents on both sides of the case. The cover of each vent has a certain depth (c) to prevent rainwater from flowing through the vent during rainy weather. The air heated by the cameras and other devices inside the module collects in the upper space; therefore, the case is made of aluminum, which has high thermal conductivity and allows heat to be radiated through conduction. In addition, to increase heat radiation by enlarging the area in contact with air, the outer upper region of the case is designed as a heat sink.

2.4. Sunshade and Damper Design

For outdoor use, a sunshade was designed because the device may be damaged by long exposure to sunlight, as shown in Figure 6. The cover of the sunshade is a mesh of black fabric that absorbs heat from sunlight and provides good ventilation between the inside and outside of the cover. The guide frame of the sunshade is made of plastic so that the heat of the shade does not easily reach the multimodal sensor.
A damper was designed to absorb the vibration of a moving mobile robot so that clear data can be obtained from the multimodal sensor, as shown in Figure 7. The upper and lower frames of the damper were assembled using wire damper parts inclined at 45°, with a total of six such wires placed at 60° intervals.

3. Multi-Modal Sensor Calibration Method

3.1. Multi-Modal Sensor Synchronization

For the synchronization of multimodal sensor data, the data acquisition system is implemented as a data acquisition process (Grabber) and a data storage process (Saver) with a common shared space in memory, as shown in Figure 8. The Grabber process acquires information such as camera images and LiDAR sensor data and uploads it to a Memcached server [32], and the Saver process reads the data from the Memcached server at regular intervals and saves it in a file format. Therefore, the sensor data acquisition and storage processes can operate at different cycles within the system, transmitting and receiving data asynchronously.
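The following is a minimal Python sketch of how a Grabber and a Saver could exchange one set of sensor data through a Memcached server, in the spirit of Figure 8. It uses the pymemcache client; the key names, timestamp handling, and file format are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the Grabber/Saver split around a Memcached server.
# Assumptions: pymemcache as the client library, "rgb/latest", "lidar/latest",
# and "meta/latest" as key names, and a raw binary dump as the file format.
import json
import time
from pymemcache.client.base import Client

mc = Client(("localhost", 11211))  # Memcached server acting as the shared space


def grabber_step(camera_frame: bytes, lidar_scan: bytes) -> None:
    """Acquire one set of sensor data and upload it with a timestamp."""
    stamp = time.time()
    mc.set("rgb/latest", camera_frame)
    mc.set("lidar/latest", lidar_scan)
    mc.set("meta/latest", json.dumps({"stamp": stamp}).encode())


def saver_step(last_saved: float) -> float:
    """Read the newest data at the Saver's own rate and persist it if it is new."""
    meta = mc.get("meta/latest")
    if meta is None:
        return last_saved
    stamp = json.loads(meta)["stamp"]
    if stamp <= last_saved:            # nothing new since the previous save
        return last_saved
    frame = mc.get("rgb/latest")
    scan = mc.get("lidar/latest")
    with open(f"frame_{stamp:.3f}.bin", "wb") as f:   # placeholder file format
        f.write(frame or b"")
        f.write(scan or b"")
    return stamp
```

In use, the two functions would run in separate processes, each in its own loop and at its own period, which is what allows the asynchronous acquisition and storage described above.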

3.2. Multi-Modal Sensor Calibration Method

A calibration method for the multiple types of image sensors mounted on the multimodal sensor module can employ the well-known method detailed in [33]. However, for calibration between the 3D LiDAR and the color or thermal images, it is difficult to find a point in the 2D image coordinate system and the corresponding point in the 3D coordinate system because of the low resolution of the LiDAR point cloud. Figure 9 shows that the vertices of a rectangular calibration board are difficult to detect accurately in a 3D point cloud. To calibrate these heterogeneous sensors, the black and white pattern and the vertices of the pattern board are detected in the image using the Harris corner detection algorithm, and the coordinates of the board vertices are designated as the corresponding points used for calibration. Next, the distance from the LiDAR sensor to the board is specified in advance, and the point cloud is filtered by this distance so that only the points formed on the pattern board remain, as shown in Figure 10. Subsequently, only the point cloud information corresponding to the plane is retained using a plane model segmentation algorithm, which estimates and classifies the plane based on the random sample consensus (RANSAC) algorithm.
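A minimal sketch of the two per-sensor pre-processing steps described above, assuming OpenCV for the Harris corner detection and a simple range filter for the LiDAR points. The corner-response threshold, the 3 m board distance, and the tolerance are placeholder assumptions.

```python
# Image-side and LiDAR-side pre-processing, as a sketch:
# 1) Harris corners for the pattern board vertices (OpenCV).
# 2) A range filter that keeps only LiDAR points near the known board distance.
import cv2
import numpy as np


def board_corners_2d(gray: np.ndarray) -> np.ndarray:
    """Return pixel coordinates of strong Harris-corner responses on the board."""
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(response > 0.05 * response.max())   # assumed threshold
    return np.stack([xs, ys], axis=1)                      # (N, 2) pixel coordinates


def filter_points_by_range(points: np.ndarray,
                           board_range_m: float = 3.0,     # assumed board distance
                           tol_m: float = 0.5) -> np.ndarray:
    """Keep only 3D points whose range is close to the known board distance."""
    ranges = np.linalg.norm(points, axis=1)
    return points[np.abs(ranges - board_range_m) < tol_m]
```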
Next, a three-dimensional plane equation is derived using the filtered 3D coordinates of the point cloud. A RANSAC-based plane fitting algorithm is used for plane detection, and the parameters of the 3D plane equation with the best fit are obtained, as shown in Figure 11.
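One possible realization of this RANSAC plane fit uses Open3D's plane segmentation; the distance threshold and iteration count below are illustrative assumptions, not the authors' parameters.

```python
# RANSAC plane fit on the range-filtered board points using Open3D.
# Returns the plane a*x + b*y + c*z + d = 0 and the inlier points.
import numpy as np
import open3d as o3d


def fit_board_plane(points: np.ndarray):
    """Fit a 3D plane to the board points with RANSAC."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    plane_model, inliers = pcd.segment_plane(distance_threshold=0.02,  # assumed
                                             ransac_n=3,
                                             num_iterations=1000)
    return np.asarray(plane_model), points[inliers]   # [a, b, c, d], inlier points
```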
To obtain the coordinates of the point cloud formed on the edges of the calibration board, the first and last data points of each LiDAR channel were obtained and projected onto the 3D plane. Then, using these points as inputs, a RANSAC-based line fitting algorithm was used to find four direction vectors that are orthogonal to each other through a relational analysis of the data, and the line equations of the calibration board edges that most closely matched the data were derived, as shown in Figure 12.
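A minimal sketch of a RANSAC line fit for one board edge, written directly in NumPy since the exact implementation is not specified; the iteration count and inlier tolerance are assumptions.

```python
# RANSAC fit of a 3D line (point p0, unit direction d) to edge points.
import numpy as np


def ransac_line_3d(points: np.ndarray, iters: int = 200,
                   tol_m: float = 0.02, seed: int = 0):
    """Return (p0, d) for the line best supported by the edge points."""
    rng = np.random.default_rng(seed)
    best_inliers, best_line = 0, None
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p0, d = points[i], points[j] - points[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # Perpendicular distance of every point to the candidate line.
        diff = points - p0
        dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
        inliers = int(np.sum(dist < tol_m))
        if inliers > best_inliers:
            best_inliers, best_line = inliers, (p0, d)
    return best_line
```

Running this once per edge yields four lines; intersecting adjacent lines then gives the 3D coordinates of the board corners used as correspondences.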
Figure 13 shows how the extrinsic calibration computes the final calibration parameters from the corresponding points obtained through the previous procedure.
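One way to compute the extrinsic parameters from the four 3D-2D correspondences is a perspective-n-point (PnP) solve, as sketched below with OpenCV. The camera matrix shown is a placeholder, not the module's actual intrinsics.

```python
# Extrinsic estimation from the four board-corner correspondences via PnP.
import cv2
import numpy as np


def extrinsic_from_correspondences(corners_3d: np.ndarray,   # (4, 3) LiDAR frame
                                   corners_2d: np.ndarray,   # (4, 2) pixel frame
                                   camera_matrix: np.ndarray,
                                   dist_coeffs: np.ndarray):
    """Solve for the LiDAR-to-camera rotation R and translation t."""
    ok, rvec, tvec = cv2.solvePnP(corners_3d.astype(np.float64),
                                  corners_2d.astype(np.float64),
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP failed; check the correspondences")
    R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix
    return R, tvec


# Placeholder intrinsics, for illustration only (not the module's values).
K = np.array([[900.0, 0.0, 640.0],
              [0.0, 900.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)
```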
Finally, as shown in Figure 14, the calibration method is applied using a board with a heating wire embedded in the grid so that the same method can be employed for both color and thermal images. Figure 15 shows that the proposed method achieves higher accuracy than finding correspondences with the existing click-based approach. For the outdoor environment, the synchronized image, including the calibration result, can be obtained as shown in Figure 16.
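For completeness, a sketch of how the calibrated extrinsics could be used to overlay LiDAR points on a color or thermal image, as in Figure 16. The visibility check and drawing style are illustrative choices.

```python
# Overlay calibrated LiDAR points on an image using the estimated extrinsics.
import cv2
import numpy as np


def overlay_lidar(image: np.ndarray, points_3d: np.ndarray,
                  rvec: np.ndarray, tvec: np.ndarray,
                  camera_matrix: np.ndarray, dist_coeffs: np.ndarray) -> np.ndarray:
    """Project 3D LiDAR points into the image and draw those in front of the camera."""
    R, _ = cv2.Rodrigues(rvec)
    cam_pts = points_3d @ R.T + tvec.reshape(1, 3)
    in_front = cam_pts[:, 2] > 0.0                    # keep points in front of the camera
    if not np.any(in_front):
        return image.copy()
    pts_2d, _ = cv2.projectPoints(points_3d[in_front].astype(np.float64),
                                  rvec, tvec, camera_matrix, dist_coeffs)
    out = image.copy()
    h, w = image.shape[:2]
    for (u, v) in pts_2d.reshape(-1, 2):
        if 0 <= u < w and 0 <= v < h:
            cv2.circle(out, (int(u), int(v)), 2, (0, 255, 0), -1)  # green point
    return out
```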

4. Application of Multi-Modal Sensor Calibration Method

4.1. Computing for Multi-Modal Sensor System

To apply the proposed multimodal sensor module to a surveillance system, a computing device is required that can process the acquired sensor data and transmit it to the control system. For active security, modules that synchronize and acquire sensor data, recognize moving objects based on the acquired data, and recognize abnormal situations are installed and communicate with the control room. The computing system, together with a multimodal sensor module, is mounted on the fixed and mobile agents, as shown in Figure 17.

4.2. Multi-Modal Sensor System Installation and Test

The proposed multimodal sensor module was installed at the Safety Robot Demonstration Center in Pohang and the Nano Industrial Complex in Gwangju, Korea. The Pohang Safety Robot Demonstration Center is an area where external access is controlled, whereas the Gwangju Nano Industrial Complex includes public roads, as shown in Figure 18. After installation, the module operated continuously for more than 4 years, and long-term operation and testing based on day/night security scenarios were performed, as shown in Figure 19. In addition, the multimodal sensor data obtained from fixed and mobile agents are used to recognize abnormal situations by means of multilayer probability maps, as shown in Figure 20. Finally, Figure 21 shows human and car detection results demonstrating the performance of the proposed sensor module. The multi-modal sensor data can robustly detect objects of interest not only during the day and night but also in most weather conditions, such as fog and rain. The agents were able to recognize and respond to abnormal situations, such as fire, illegal parking, noise, and intruders [34].

5. Conclusions and Future Works

In this study, we introduced the design and implementation of a multimodal sensor module for a surveillance system. The module contains various types of cameras and is designed to be waterproof and to dissipate heat effectively. It also includes a device that provides a simple interface and power supply and one that attenuates vibration during movement. The proposed sensor module provides corrected data by calibrating the synchronized sensor data (see Supplementary Materials). The module can be employed on both fixed and mobile agents and has been in operation for more than four years.
Moreover, it was installed at the Safety Robot Demonstration Center in Pohang and the Nano Industrial Complex in Gwangju and operated as a surveillance system for more than six months to complete the demonstration test consistently. The results show that abnormal situations can be recognized and responded to by using the multimodal sensor module during both the daytime and nighttime. In other words, the multimodal sensor module has proven to be a useful tool for round-the-clock outdoor security.
In the future, we plan to upgrade the module for harsher environments (e.g., polar regions) and evaluate its usefulness. In addition, we plan to apply it to agent systems in other fields, such as medical institutions, smart factories, and Arctic exploration.

Supplementary Materials

Our source code and documentation are available online at https://github.com/lge-robot-navi/Multi-modal-Sensor-based-System (accessed on 8 June 2022).

Author Contributions

Conceptualization, T.U. and Y.C.; methodology, J.P. and J.L.; software, G.B. and J.L.; validation, J.P. and Y.C.; formal analysis, T.U.; investigation, T.U., G.B. and G.K.; resources, J.P.; data curation, J.L.; writing—original draft preparation, T.U., G.B. and G.K.; writing—review and editing, Y.C.; visualization, T.U. and G.B.; supervision, Y.C.; project administration, T.U. and Y.C.; funding acquisition, T.U. and Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the Institute for Information & Communication Technology Promotion (IITP) grant, funded by the Korea government (MSIT) (No. 2017-0-00306), and in part by the Korean Evaluation Institute of Industrial Technology (KEIT), funded by the Ministry of Trade Industry and Energy (MOTIE) (No. 10080489).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the researchers of LG Electronics, ETRI, KIRO, SNU, REDONE, RASTECH, and SKKU for the five years of the experiment.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Haritaoglu, I.; Harwood, D.; Davis, S.L. A Fast Background Scene Modeling and Maintenance for Outdoor Surveillance. In Proceedings of the 15th International Conference on Pattern Recognition (ICPR), Barcelona, Spain, 3–8 September 2000; pp. 179–183. Available online: https://ieeexplore.ieee.org/abstract/document/902890 (accessed on 7 March 2022).
  2. Kurada, S.; Bradley, C. A review of machine vision sensors for tool condition monitoring. Comput. Ind. 1997, 34, 55–72. Available online: https://www.sciencedirect.com/science/article/pii/S0166361596000759 (accessed on 7 March 2022). [CrossRef]
  3. Naverlabs Dataset. Available online: http://github.com/naver/kapture (accessed on 7 March 2022).
  4. Geiger, A.; Lenz, P.; Stiller, C.; Urtasun, R. Vision meets Robotics: The KITTI Dataset. Int. J. Robot. Res. 2013, 32, 1231–1237. Available online: https://journals.sagepub.com/doi/full/10.1177/0278364913491297 (accessed on 7 March 2022). [CrossRef] [Green Version]
  5. Chen, Y.L.; Jahanshahi, M.R.; Manjunatha, P.; Gan, W.; Abdelbarr, M.; Masri, S.F.; Gerber, B.B.; Caffrey, J.P. Inexpensive Multimodal Sensor Fusion System for Autonomous Data Acquisition of Road Surface Conditions. Sensors 2016, 21, 7731–7743. Available online: https://ieeexplore.ieee.org/abstract/document/7556304 (accessed on 7 March 2022). [CrossRef]
  6. Bechtel, B.; Alexander, P.J.; Böhner, J.; Ching, J.; Conrad, O.; Feddema, J.; Mills, G.; See, L.; Stewart, I. Mapping Local Climate Zones for a Worldwide Database of the Form and Function of Cities. ISPRS Int. J. Geo-Inf. 2015, 4, 199–219. Available online: https://www.mdpi.com/2220-9964/4/1/199 (accessed on 7 March 2022). [CrossRef] [Green Version]
  7. Ferryman, J.; Shahrokni, A. PETS2009: Dataset and challenge. In Proceedings of the Twelfth IEEE International Workshop on Performance Evaluation of Tracking and Surveillance, Snowbird, UT, USA, 7–9 December 2009; Available online: https://ieeexplore.ieee.org/abstract/document/5399556 (accessed on 7 March 2022).
  8. Woznowski, P.; Fafoutis, X.; Song, T.; Hannuna, S.; Camplani, M.; Tao, L.; Paiement, A.; Mellios, E.; Haghighi, M.; Zhu, N.; et al. A Multi-modal Sensor Infrastructure for Healthcare in a Residential Environment. In Proceedings of the IEEE International Conference on Communication Workshop (ICCW), London, UK, 8–12 June 2015; Available online: https://ieeexplore.ieee.org/abstract/document/7247190 (accessed on 7 March 2022).
  9. Harper, S.E.; Schmitz, D.G.; Adamczyk, P.G.; Thelen, D.G. Fusion of Wearable Kinetic and Kinematic Sensors to Estimate Triceps Surae Work during Outdoor Locomotion on Slopes. Sensors 2022, 22, 1589. Available online: https://www.mdpi.com/1424-8220/22/4/1589 (accessed on 7 March 2022). [CrossRef] [PubMed]
  10. Weiner, P.; Neef, C.; Shibata, Y.; Nakamura, Y.; Asfour, T. An Embedded, Multi-Modal Sensor System for Scalable Robotic and Prosthetic Hand Fingers. Sensors 2020, 20, 101. Available online: https://www.mdpi.com/1424-8220/20/1/101 (accessed on 7 March 2022). [CrossRef] [PubMed] [Green Version]
  11. Marín, L.; Vallés, M.; Soriano, Á.; Valera, Á.; Albertos, P. Multi Sensor Fusion Framework for Indoor-Outdoor Localization of Limited Resour. Sensors 2013, 13, 14133–14160. Available online: https://www.mdpi.com/1424-8220/13/10/14133 (accessed on 7 March 2022). [CrossRef] [PubMed]
  12. Klingbeil, L.; Reiner, R.; Romanovas, M.; Traechtler, M.; Manoli, Y. Multi-modal Sensor Data and Information Fusion for Localization in Indoor Environments. In Proceedings of the 7th Workshop on Positioning, Navigation and Communication (WPNC), Dresden, Germany, 11–12 March 2010; Available online: https://ieeexplore.ieee.org/abstract/document/5654128 (accessed on 7 March 2022).
  13. Vachmanus, S.; Ravankar, A.A.; Emaru, T.; Kobayashi, Y. Multi-Modal Sensor Fusion-Based Semantic Segmentation for Snow Driving Scenarios. Sensors 2021, 21, 16839–16851. Available online: https://ieeexplore.ieee.org/abstract/document/9420724 (accessed on 7 March 2022). [CrossRef]
  14. Hong, B.; Zhou, Y.; Qin, H.; Wei, Z.; Liu, H.; Yang, Y. Few-Shot Object Detection Using Multimodal Sensor Systems of Unmanned Surface Vehicles. Sensors 2022, 22, 1511. Available online: https://www.mdpi.com/1424-8220/22/4/1511 (accessed on 7 March 2022). [CrossRef] [PubMed]
  15. Haris, M.; Glowacz, A. Navigating an Automated Driving Vehicle via the Early Fusion of Multi-Modality. Sensors 2022, 22, 1425. Available online: https://www.mdpi.com/1424-8220/22/4/1425 (accessed on 7 March 2022). [CrossRef] [PubMed]
  16. Khatab, E.; Onsy, A.; Abouelfarag, A. Evaluation of 3D Vulnerable Objects’ Detection Using a Multi-Sensors System for Autonomous Vehicles. Sensors 2022, 22, 1663. Available online: https://www.mdpi.com/1424-8220/22/4/1663 (accessed on 7 March 2022). [CrossRef] [PubMed]
  17. Park, J.H.; Sim, K.B. A Design of Mobile Robot based on Network Camera and Sound Source Localization for Intelligent Surveillance System. In Proceedings of the International Conference on Control, Automation and Systems (ICCAS), Seoul, Korea, 14–17 October 2008; Available online: https://ieeexplore.ieee.org/abstract/document/4694586 (accessed on 7 March 2022).
  18. Raimondo, D.M.; Kariotoglou, N.; Summers, S.; Lygeros, J. Probabilistic certification of pan-tilt-zoom camera surveillance systems. In Proceedings of the IEEE Conference on Decision and Control and European Control Conference (CDC-ECC), Orlando, FL, USA, 12–15 December 2011; Available online: https://ieeexplore.ieee.org/abstract/document/6161534 (accessed on 7 March 2022).
  19. Prati, A.; Vezzani, R.; Benini, L.; Farella, E.; Farella, P. An Integrated Multi-Modal Sensor Network for Video Surveillance. In Proceedings of the Third ACM International Workshop on Video Surveillance & Sensor Networks, Singapore, 11 November 2005; Available online: https://dl.acm.org/doi/abs/10.1145/1099396.1099415 (accessed on 7 March 2022).
  20. Chakravarty, P.; Jarvis, R. External Cameras & A Mobile Robot: A Collaborative Surveillance System. In Proceedings of the Australasian Conference on Robotics and Automation (ACRA), Sydney, Australia, 2–4 December 2009; Available online: https://www.araa.asn.au/acra/acra2009/papers/pap135s1.pdf (accessed on 7 March 2022).
  21. Menegatti, E.; Mumolo, E.; Nolich, M.; Pagello, E. A Surveillance System based on Audio and Video Sensory Agents cooperating with a Mobile Robot. In Proceedings of the 8th International Conference on Intelligent Autonomous Systems (IAS-8), Amsterdam, The Netherlands, 10–13 March 2004; Available online: https://www.academia.edu/9115938/A_Surveillance_System_based_on_Audio_and_Video_Sensory_Agents_cooperating_with_a_Mobile_Robot (accessed on 13 July 2022).
  22. Wu, X.; Gong, H.; Chen, P.; Zhong, Z.; Yangsheng, X. Surveillance Robot Utilizing Video and Audio Information. J. Intell. Robot. Syst. 2009, 55, 403–421. Available online: https://link.springer.com/article/10.1007/s10846-008-9297-3 (accessed on 7 March 2022). [CrossRef]
  23. López, J.; Pérez, D.; Paz, E.; Santana, A. WatchBot: A building maintenance and surveillance system based on autonomous robots. Robot. Auton. Syst. 2013, 61, 1559–1571. Available online: https://www.sciencedirect.com/science/article/pii/S0921889013001218 (accessed on 7 March 2022). [CrossRef]
  24. Siebel, N.T.; Maybank, S. The ADVISOR Visual Surveillance System. In Proceedings of the ECCV 2004 Workshop Applications of Computer Vision (ACV), Prague, Czech Republic, 11–14 May 2004; Available online: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.158.4852&rep=rep1&type=pdf (accessed on 7 March 2022).
  25. Clavel, C.; Ehrette, T.; Richard, G. Events Detection for an Audio-based Surveillance System. In Proceedings of the IEEE International Conference on Multimedia and Expo, Amsterdam, The Netherlands, 6–7 July 2005; Available online: https://ieeexplore.ieee.org/abstract/document/1521669 (accessed on 7 March 2022).
  26. Zhang, T.; Chowdhery, A.; Bahl, P.; Jamieson, K.; Banerjee, S. The Design and Implementation of a Wireless Video Surveillance System. In Proceedings of the 21st Annual International Conference on Mobile Computing and Networking (MoviCom’15), Paris, France, 7–11 September 2015; Available online: https://dl.acm.org/doi/abs/10.1145/2789168.2790123 (accessed on 7 March 2022).
  27. Chun, W.H.; Papanikolopoulos, N. Robot Surveillance and Security. In Robot Surveillance and Security, 2nd ed.; Siciliano, B., Khatib, O., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; Volume 1, pp. 1605–1626. Available online: https://link.springer.com/chapter/10.1007/978-3-319-32552-1_61 (accessed on 7 March 2022).
  28. Wang, W.; Yuan, X.; Wu, X.; Liu, Y. Fast Image Dehazing Method Based on Linear Transformation. IEEE Trans. Multimed. 2017, 19, 1142–1155. [Google Scholar] [CrossRef]
  29. Ouahabi, A. (Ed.) Signal and Image Multiresolution Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2012; Available online: https://www.wiley.com/en-us/9781118568668 (accessed on 8 July 2022).
  30. Haneche, H.; Ouahabi, A.; Boudraa, B. New mobile communication system design for Rayleigh environments based on compressed sensing-source coding. IET Commun. 2019, 13, 2375–2385. [Google Scholar] [CrossRef]
  31. Mahdaoui, A.E.; Ouahabi, A.; Moulay, M.S. Image denoising using a compressive sensing approach based on regularization constraints. Sensors 2022, 22, 2199. [Google Scholar] [CrossRef] [PubMed]
  32. What Is Memcached? Available online: http://www.memcached.org/ (accessed on 7 March 2022).
  33. Zhang, Z. A Flexible New Technique for Camera Calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. Available online: https://ieeexplore.ieee.org/abstract/document/888718 (accessed on 7 March 2022). [CrossRef] [Green Version]
  34. Shin, H.; Na, K.I.; Chang, J.; Uhm, T. Multimodal layer surveillance map based anomaly detection using multi-agents for smart city security. ETRI J. 2022, 44, 183–193. Available online: https://onlinelibrary.wiley.com/doi/full/10.4218/etrij.2021-0395 (accessed on 7 March 2022). [CrossRef]
Figure 1. Multi-modal sensor system configuration.
Figure 2. Overall structure of the multi-modal sensor module.
Figure 3. Arrangement of sensors: (a) horizontal distance and (b) vertical distance.
Figure 4. Measures to prevent rainwater inflow: (a) front view (translucent); (b) bottom view.
Figure 5. Measures for heat exhaust and cooling. The cover of the vent has a certain depth (c) to prevent rainwater from flowing through the vent during rainy weather.
Figure 6. Sunshade for the multi-modal sensor module.
Figure 7. Damper for the multi-modal sensor module.
Figure 8. Data acquisition process for synchronization of multimodal sensor data.
Figure 9. Examples of images and 3D LiDAR data.
Figure 10. Noise filtering for 3D LiDAR data of interest.
Figure 11. 3D plane fitting.
Figure 12. Four-line filtering for pattern corners.
Figure 13. Four correspondences for calibration.
Figure 14. RGB and thermal input images and 3D LiDAR data.
Figure 15. Comparison of (a) the click-based correspondence method and (b) the proposed method.
Figure 16. Example of the calibrated data.
Figure 17. Multi-modal sensor module mounting: (top) the multi-modal sensor module mounted on mobile agents; (bottom) the computing system and fixed agents with the multi-modal sensor module.
Figure 18. Installation of eight fixed agents based on the multi-modal sensor module at the Safety Robot Demonstration Center, Pohang (36.119066, 129.415796) and the Nano Industrial Complex, Gwangju (35.244247, 126.835933).
Figure 19. Day/night surveillance system test with six mobile agents based on the multi-modal sensor module, including abnormal situations (e.g., fire, illegal parking, and intruders).
Figure 20. Example of observation regions, including abnormality probabilities, from the day/night surveillance system using fixed and mobile agents equipped with the multi-modal sensor module.
Figure 21. Examples of human and car detection results (a) during day- and nighttime and (b) in a fog environment.
Table 1. Field of view for multi-modal sensors.

Sensors                 Field of View
RGB and Depth Camera    IR (2EA): 85.2° (H) × 58° (V); RGB: 69.4° (H) × 42.5° (V)
Thermal Camera          90° (H) × 69° (V)
Night Vision Camera     116.8° (H) × 101.3° (V)
3D LiDAR                360° (H) × 30° (V)
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
