Article

Hybrid Sensor Network-Based Indoor Surveillance System for Intrusion Detection

1
Department of Image, Chung-Ang University, 84 Heukseok-ro, Dongjak-gu, Seoul 06974, Korea
2
Multidisciplinary Sensor Research Group, Electronics and Telecommunications Research Institute (ETRI), Gajeong-ro 218, Yuseong-gu, Daejeon 34129, Korea
*
Author to whom correspondence should be addressed.
Symmetry 2018, 10(6), 181; https://doi.org/10.3390/sym10060181
Submission received: 19 April 2018 / Revised: 20 May 2018 / Accepted: 21 May 2018 / Published: 23 May 2018

Abstract

This paper presents a novel hybrid sensor-based intrusion detection system for low-power surveillance in an empty, sealed indoor space with or without illumination. The proposed system includes three functional steps: (i) initial detection of an intrusion event using a sound field sensor; (ii) automatic lighting control based on the detected event; and (iii) detection and tracking of the intruder using an image sensor. The proposed hybrid sensor-based surveillance system uses a sound field sensor to detect an abnormal event in a very low-light or completely dark environment 24 h a day to reduce the power consumption. After the sound sensor detects the intrusion, a collaborative image sensor takes over the accurate detection and tracking tasks. The proposed hybrid system can be applied to various surveillance environments such as an office room after work, an empty automobile, a safe room in a bank, and an armory. This paper deals with the fusion of computer-aided pattern recognition and physics-based sound field analysis, which reflects the symmetric aspect of computer vision and physical analysis.

1. Introduction

An automatic indoor surveillance system should be able to detect dangerous events such as illegal intrusion for 24 h a day without human monitoring. However, a single image sensor-based system often fails to detect the event because of various unstable illumination conditions. To solve this problem, multiple hybrid sensors can collaborate to increase the detection accuracy and stability [1].
Performance of a general vision-based surveillance system is limited under low-illumination conditions. Although many low-light image enhancement methods were recently proposed [2,3], the surveillance function does not work if there is no light at all. To illuminate an empty indoor space for acquiring high-quality images, high power consumption is unavoidable. Typical non-visual sensors such as passive infrared (PIR) and thermal sensors can be used for detection under a low-light condition. However, movement behind an obstacle such as a wall, for example, cannot be detected. A combined procedure of generating a sound wave and monitoring changes in the sound field configuration, referred to as ‘sound field technology’, is advantageous for solving this problem because most kinds of intrusion can be detected even in blind spots [4]. To cope with low illumination and the blind spot problem, a sound field sensor can efficiently detect an abnormal intrusion using a pair of a speaker and a microphone, even in a completely dark environment. On the other hand, a sound field sensor-based surveillance system can only detect intrusion; it cannot provide any additional information such as the shape, behavior, color, or position of the intruding object. Furthermore, the sound field sensor is sensitive to vibration caused by outdoor wind, air-conditioners, and acoustic noise. Compared with in-ground intrusion detection devices such as optical fiber, magnetic, and radio frequency (RF) sensors, however, the sound field sensor has different features. At a real surveillance site, there are many false alarms due to environmental noise, which can be filtered out using a relatively strong sound source with a set of multiple high-frequency sine waves. To take advantage of both image and sound field sensors, we present a combined hybrid sensor-based surveillance system.
The proposed surveillance system can first detect an intruding event in a dark, sealed space, such as an office after work, a safe room of a bank, and an armory room, as shown in Figure 1a. After the sound field sensor detects the intrusion, the system turns on the light for the vision sensor to start tracking the intruding object as shown in Figure 1b.
The main contribution of this paper consists of two parts: (i) the combination of an image sensor and a sound field sensor to detect intrusion in a low-illumination environment and (ii) power-efficient surveillance without illumination before an intrusion occurs.
The paper is organized as follows: Section 2 briefly summarizes existing intrusion detection techniques, Section 3 describes the image sensor-based intrusion detection method, and Section 4 presents a novel sound field sensor-based intrusion detection technique. The combined hybrid sensor-based surveillance system is presented in Section 5. After demonstrating the performance of the proposed system using experimental results in Section 6, Section 7 concludes the paper.

2. Related Works

Various image processing and computer vision algorithms were proposed to detect an illegal intrusion using image sensors such as closed-circuit television (CCTV) or internet protocol (IP) cameras [5]. A general visual surveillance system generates an alarm signal if an object enters a pre-specified region, using background generation, frame difference, or motion information. Zhan et al. detected an intruding object using frame difference and edge detection [6]. Yuan et al. detected the intrusion event by first detecting the object using background difference, and then generating Gabor features for support vector machine (SVM) classification [7]. Dastidar et al. first generated a background image, and detected an intruding object using the correlation between the object of interest and the background [8]. Zhang et al. used the Fourier descriptor (FD) and histogram of oriented gradients (HoG) to develop a perimeter intrusion detection (PID) system [9]. Chen et al. pre-assigned a dangerous region-of-interest, detected an object by generating a Gaussian mixture model (GMM)-based background image, and then generated an alarm if the object fell in the pre-specified region [10]. Although the background generation-based frame difference method is sensitive to the accuracy of the generated background, it can efficiently detect the foreground object in a fixed camera-based surveillance system. Hariyono et al. detected a moving object using optical flow with the Kanade-Lucas-Tomasi (KLT) method [11], and Hossen et al. detected abnormal objects using motion vectors based on Horn–Schunck optical flow [12]. Optical flow estimation-based object detection methods can detect an object from a moving camera. However, the estimated motion is very sensitive to noise in the image. Chauhan et al. proposed a moving object detection method that combines GMM-based background difference and optical flow to compensate for the disadvantages of typical motion estimation-based and background difference-based methods [13]. Additional research on visual surveillance techniques can be found in [14,15,16,17,18,19,20].
However, the image sensor cannot successfully detect an intrusion event without a sufficient amount of illumination. To solve the low-light illumination problem, sensors of various modalities were used, such as acoustic sensors [21], robots [22], radar sensors [23,24,25], radio frequency (RF) sensors [26], infrared (IR) sensors [27,28], and wireless sensor networks (WSN) [22,26,29,30,31,32]. Considering the implementation cost, a sound field sensor is very efficient for detecting an intrusion in an empty, sealed space. In this context, Lee et al. successfully detected intrusion events using the sound field sensor in [4].
In order to compensate for the weakness of a single sensor, many researchers proposed surveillance systems using several sensors. Andersson et al. proposed a two-stage fusion method based on acoustic and optical sensor data to detect an abnormal event [33]. Castro et al. proposed a multi-sensor intelligent system by integrating different information obtained from multiple sensors such as surveillance cameras, motion sensors, or microphones [1]. Azzam proposed a surveillance system that detects and tracks an object using image and acoustic sensors [34].

3. Image Sensor-Based Intrusion Detection

The proposed image sensor-based intrusion detection method detects whether a moving object crosses a pre-specified boundary in the region-of-interest (ROI). More specifically, a user assigns the ROI a priori, together with inbound and outbound directions. A correlation filter is used to track an object since it is faster than other spatial domain tracking algorithms [35,36,37,38,39]. Since the correlation filter-based MOSSE (Minimum Output Sum of Squared Error) tracker is simple and learns the target appearance for each frame, it gives accurate, fast tracking results. It has been shown that the MOSSE filter-based tracker is more suitable for estimating the trajectory of a moving object than other types of trackers such as KLT [40,41], mean-shift [42], Kalman [43], and particle filters [44].
The MOSSE filter, referred to as $H$, minimizes the squared error of the residual
$$\sum_i \left| F_i \odot H^* - G_i \right|^2, \tag{1}$$
where $F_i$ and $G_i$ respectively represent the Fourier transforms of a set of training input images and of the corresponding training correlation outputs, and $H$ represents the Fourier transform of the correlation filter. $\odot$ and $*$ denote element-wise multiplication and complex conjugation, respectively. To minimize Equation (1), we take the partial derivative of the function with respect to $H^*$, and solve the minimality condition as
$$0 = \frac{\partial}{\partial H_{wv}^*} \sum_i \left| F_{i,wv} H_{wv}^* - G_{i,wv} \right|^2, \tag{2}$$
where $w$ and $v$ represent the indices of the frequency variables. By solving the optimization problem for $H^*$, we obtain a closed-form expression for the MOSSE filter as
$$H^* = \frac{\sum_i G_i \odot F_i^*}{\sum_i F_i \odot F_i^*}. \tag{3}$$
Finally, the MOSSE filter tracks the target using the peak-to-sidelobe ratio (PSR) with the strongest signal in the correlation response map.
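The closed-form filter of Equation (3) and the PSR validation step can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the regularization constant and the 11×11 sidelobe exclusion window are common MOSSE conventions assumed here, not values given in this paper.

```python
import numpy as np

def train_mosse(training_images, training_targets, eps=1e-5):
    """Closed-form MOSSE filter of Equation (3): H* = sum(G_i . F_i*) / sum(F_i . F_i*)."""
    num = np.zeros(training_images[0].shape, dtype=complex)
    den = np.zeros_like(num)
    for img, target in zip(training_images, training_targets):
        F = np.fft.fft2(img)
        G = np.fft.fft2(target)          # desired correlation output (e.g., a sharp peak)
        num += G * np.conj(F)
        den += F * np.conj(F)
    return num / (den + eps)             # eps regularizes near-zero frequency bins

def psr(response):
    """Peak-to-sidelobe ratio used to validate the tracked location."""
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    # exclude an 11x11 window around the peak when estimating the sidelobe statistics
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - 5):py + 6, max(0, px - 5):px + 6] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)
```

Applying the trained filter to a frame amounts to `np.real(np.fft.ifft2(np.fft.fft2(frame) * H))`; the location of the strongest peak, validated by its PSR, gives the target position.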
The trajectory can be obtained by using the center of the object between the current and previous frames estimated using the MOSSE filter. Given the motion trajectory, a geometric analysis finally finds vector crossings for illegal intrusion as shown in Figure 2.
In Figure 2, line 1 represents the line assigned a priori by the user, and line 2 represents the moving trajectory estimated by the MOSSE filter. Parametric equations of lines 1 and 2 can be expressed using parameters $t$ and $s$, $t, s \in [0, 1]$, respectively, as
$$P(t) = (1 - t)P_1 + tP_2, \qquad P(s) = (1 - s)P_3 + sP_4, \tag{4}$$
where line 1 passes through points $P_1$ and $P_2$, and line 2 passes through $P_3$ and $P_4$. The crossing point of lines 1 and 2 is determined by solving the following equation:
$$(1 - t)P_1 + tP_2 = (1 - s)P_3 + sP_4, \tag{5}$$
which can be decomposed into two equations in $(x, y)$ coordinates as
$$x_1 + t(x_2 - x_1) = x_3 + s(x_4 - x_3), \qquad y_1 + t(y_2 - y_1) = y_3 + s(y_4 - y_3). \tag{6}$$
This can be rewritten in terms of $t$ and $s$ as
$$t = \frac{(x_4 - x_3)(y_1 - y_3) - (y_4 - y_3)(x_1 - x_3)}{(y_4 - y_3)(x_2 - x_1) - (x_4 - x_3)(y_2 - y_1)}, \qquad s = \frac{(x_2 - x_1)(y_1 - y_3) - (y_2 - y_1)(x_1 - x_3)}{(y_4 - y_3)(x_2 - x_1) - (x_4 - x_3)(y_2 - y_1)}, \tag{7}$$
where $t$ and $s$ represent the parameter values at which the two lines meet, and the corresponding crossing point is given as
$$x = x_1 + t(x_2 - x_1), \quad y = y_1 + t(y_2 - y_1), \quad \text{or} \quad x = x_3 + s(x_4 - x_3), \quad y = y_3 + s(y_4 - y_3). \tag{8}$$
From Equation (7), we can decide that the two line segments cross each other when both $t$ and $s$ have values in $[0, 1]$.
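As a concrete illustration, Equations (6)–(8) reduce to a few lines of code. This is a sketch; the guard against a zero denominator (parallel or collinear segments) is an added practical detail not discussed in the text.

```python
def segment_intersection(p1, p2, p3, p4):
    """Solve Equation (7) for t and s; the segments cross iff both lie in [0, 1],
    and the crossing point then follows from Equation (8)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (y4 - y3) * (x2 - x1) - (x4 - x3) * (y2 - y1)
    if abs(denom) < 1e-12:                       # parallel or collinear segments
        return None
    t = ((x4 - x3) * (y1 - y3) - (y4 - y3) * (x1 - x3)) / denom
    s = ((x2 - x1) * (y1 - y3) - (y2 - y1) * (x1 - x3)) / denom
    if 0.0 <= t <= 1.0 and 0.0 <= s <= 1.0:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))   # crossing point, Equation (8)
    return None
```

For example, the boundary segment (0, 0)–(2, 2) and the trajectory segment (0, 2)–(2, 0) cross at (1, 1) with t = s = 0.5.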
After detecting whether the object crosses the pre-assigned line, we can decide whether the object's moving direction is inbound or outbound using the angle $\theta$ as
$$\text{Object Direction} = \begin{cases} \text{outbound}, & 0^\circ < \theta < 180^\circ, \\ \text{inbound}, & 180^\circ < \theta < 360^\circ, \end{cases} \tag{9}$$
where the inbound direction is considered as an illegal intrusion.
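The direction test of Equation (9) can be realized with a signed angle between the boundary vector and the trajectory vector. This is a sketch: which half-plane counts as inbound depends on how the user draws the boundary line, so the counter-clockwise convention used here is an assumption.

```python
import math

def crossing_direction(p1, p2, p3, p4):
    """Classify the crossing direction via the angle theta from the boundary vector
    (P1 -> P2) to the trajectory vector (P3 -> P4), measured counter-clockwise."""
    bx, by = p2[0] - p1[0], p2[1] - p1[1]      # boundary line direction
    tx, ty = p4[0] - p3[0], p4[1] - p3[1]      # object trajectory direction
    theta = math.degrees(math.atan2(ty, tx) - math.atan2(by, bx)) % 360.0
    # Equation (9): 0 < theta < 180 -> outbound, 180 < theta < 360 -> inbound
    return "outbound" if 0.0 < theta < 180.0 else "inbound"
```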
The ROI-based vector crossing detection result is shown in Figure 3.

4. Sound Field Sensor-Based Intrusion Detection

In order to detect an abnormal event such as intrusion or fire without any blind zones, we use a sound field sensor. The sound field sensor technology is also advantageous in comparison with a simple sound threshold method using a microphone. Intrusion events accompanied by sound generation, such as footsteps or window breaking, can be detected by the sound threshold method. On the other hand, the proposed sound field sensor system can detect any kind of intrusion event that occurs without generating a sound signal, because it detects the change of the sound transfer function between a speaker and a microphone, which depends on the configuration of the objects and the boundary condition of the secured space. Frequent false alarms are another bottleneck of the sound threshold method; for the sound field sensor, false alarms can be reduced by using an optimized sound intensity of the sinusoidal tones even in the presence of environmental noise. The sound field sensor generally consists of a sound generator, a microphone, a speaker, and a signal processor [4,45]. We first generate a multi-tone sinusoidal sound. Next, the microphone measures the sound, and the recorded sound signal is transformed into the frequency domain. The transformed sound spectrum is then compared and analyzed in terms of its temporal periodicity.
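The probe-and-measure loop described above can be sketched as follows. The sampling rate and tone frequencies are illustrative assumptions, not the values used by the authors.

```python
import numpy as np

FS = 48_000                                   # sampling rate in Hz (assumed)
TONES = [1000, 2000, 3000, 4000, 5000]        # multi-tone probe frequencies in Hz (illustrative)

def multi_tone(duration=0.5, fs=FS, freqs=TONES):
    """Generate the multi-tone sinusoidal probe signal emitted by the speaker."""
    t = np.arange(int(duration * fs)) / fs
    return sum(np.sin(2 * np.pi * f * t) for f in freqs) / len(freqs)

def tone_levels(recorded, fs=FS, freqs=TONES):
    """Measure the level at each probe frequency from the microphone recording
    via the FFT, reading one spectral bin per tone (in dB)."""
    spectrum = np.abs(np.fft.rfft(recorded)) / len(recorded)
    bin_hz = fs / len(recorded)
    return {f: 20 * np.log10(2 * spectrum[int(round(f / bin_hz))] + 1e-12) for f in freqs}
```

In the real sensor, `recorded` would be the microphone capture of the probe after it has propagated through the secured space; comparing the per-tone levels against a reference spectrum is what reveals an intrusion.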
A general security space consists of an acoustic space defined by the sound wave equation
$$\nabla^2 P + k^2 P = 0, \tag{10}$$
with the boundary condition of the initial pressure $P(r_0)$ and velocity $U(r_0)$, as shown in Figure 4. We can estimate the transfer function of the security space by comparing the input signal to the speaker with the output of the microphone. If an object enters the security area, the sound wave is distorted by various effects including refraction, reflection, and absorption. Such abnormal events can be expressed in terms of the transfer function as shown in Figure 5.
Let the sound source be $q(t)$ and its Laplace transform $Q(s)$. If $H(s)$ represents the transfer function of the space and $P(s)$ the sound pressure at a certain position, the transfer function of the sound field can be expressed on the log scale as
$$X = 20\log(H(s)) = 20\log(P_{\mathrm{rms}}(s)/Q(s)). \tag{11}$$
When an object intrudes into the security space, the transfer function of the sound field is distorted according to the changed transfer function of the space:
$$X' = 20\log(H'(s)) = 20\log(P'_{\mathrm{rms}}(s)/Q(s)), \tag{12}$$
where the prime denotes the state after the intrusion.
The difference between Equations (11) and (12) and its magnitude are given as
$$Y = X' - X = 20\log\!\left(\frac{P'_{\mathrm{rms}}(j\omega)}{P_{\mathrm{rms}}(j\omega)}\right), \tag{13}$$
$$|Y| = \left|\,20\log\!\left(\frac{P'_{\mathrm{rms}}(j\omega)}{P_{\mathrm{rms}}(j\omega)}\right)\right|, \tag{14}$$
which is equivalent to the difference between sound pressures before and after the intrusion. In other words, the absolute magnitude of the sound pressure ratio is used to detect the intrusion in the security space.
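Equations (13) and (14) amount to a log ratio of the measured sound pressures per tone; a minimal sketch:

```python
import numpy as np

def sound_field_change(p_ref, p_meas):
    """|Y| of Equation (14): magnitude of the log sound-pressure ratio between
    the reference state and the current measurement, per tone frequency."""
    p_ref = np.asarray(p_ref, dtype=float)
    p_meas = np.asarray(p_meas, dtype=float)
    return np.abs(20 * np.log10(p_meas / p_ref))
```

A tenfold change in sound pressure at a tone yields |Y| = 20 dB at that frequency, while an unchanged tone yields 0 dB.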
To detect an intrusion, we devise an analysis algorithm that monitors the deviation between the measured sound pressure level and the reference one over the multi-tone frequencies, as shown in Figure 6. More specifically, we used the signal-to-noise ratio (SNR), where the signal represents the difference between the measured sound pressure level and the reference, and the noise represents the maximum deviation during the multiple measurements of the reference sound pressure level, as shown in Figure 7. If the SNR is larger than a pre-specified threshold, an intrusion event is detected. The SNR is the average value of S/N over the multi-tone frequencies, where
$$\mathrm{S/N} = \frac{S_{\mathrm{int\text{-}ref}}}{N_{\mathrm{ref}}}, \tag{15}$$
where $S_{\mathrm{int\text{-}ref}}$ represents the difference between the averaged $TF_{\mathrm{int}}$ and $TF_{\mathrm{ref}}$; $TF_{\mathrm{int}}$ is the transfer function under intrusion and $TF_{\mathrm{ref}}$ is the transfer function of the reference state. $N_{\mathrm{ref}}$ is the difference between the maximum and minimum of the reference sound pressure.
We calculated the S/N values at all the multi-tone frequencies. Here, S/N denotes the per-frequency signal-to-noise ratio: the signal S is the difference of the transfer function between the reference and each measurement, and the noise N is the deviation of the transfer function in the reference state. The SNR is the average of S/N over all the multi-tone frequencies. The reference value of the SNR for intrusion detection was set to 1 because an intrusion can be declared if the signal S exceeds N over all the frequencies.
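The S/N and SNR computation described above can be sketched as follows. The array shapes and the small epsilon guarding against a zero noise spread are assumptions for illustration.

```python
import numpy as np

def detect_intrusion(tf_ref_trials, tf_meas, threshold=1.0):
    """Per-tone S/N of Equation (15): S = |mean reference TF - measured TF|,
    N = max-min spread of the reference trials; SNR = mean of S/N over tones."""
    tf_ref_trials = np.asarray(tf_ref_trials, dtype=float)   # shape: (trials, tones), in dB
    tf_meas = np.asarray(tf_meas, dtype=float)               # shape: (tones,), in dB
    S = np.abs(tf_ref_trials.mean(axis=0) - tf_meas)
    N = tf_ref_trials.max(axis=0) - tf_ref_trials.min(axis=0)
    snr = float(np.mean(S / (N + 1e-12)))
    return snr, snr > threshold                              # threshold 1 per the text
```

A measurement that deviates from the reference mean by several times the reference spread at each tone drives the SNR well above 1 and triggers a detection.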

5. Hybrid Sensor-Based Intrusion Detection

We present a hybrid sensor-based surveillance system that combines image and sound field sensors to detect intrusion in a sealed indoor space without illumination and without blind zones. The overall block diagram of the proposed system is shown in Figure 8.
In a dark, empty indoor space, the sound field sensor first detects intruding events using a multi-tone sound field transfer function. To reduce the power consumption of the sound sensor, we used only a 15% duty cycle for the multi-tone signal, as shown in Figure 9.
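A 15% duty cycle simply gates the probe tone on for 15% of each measurement period. The sketch below assumes the 3.5 s measurement period reported in Section 6; the sampling rate and tone frequency are illustrative.

```python
import numpy as np

def duty_cycled_probe(period_s=3.5, duty=0.15, fs=48_000, freq=2000):
    """Probe tone active only for the first `duty` fraction of each period;
    the remainder is silence, reducing speaker/amplifier power accordingly."""
    n = int(period_s * fs)
    t = np.arange(n) / fs
    gate = (t % period_s) < duty * period_s   # boolean on/off envelope
    return np.sin(2 * np.pi * freq * t) * gate
```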
After the sound field sensor detects an intruding object, the proposed system turns on the light for the image sensor to start tracking the detected object. The details of the image sensor-based tracking algorithm were described in Section 3.
We can consider a data fusion method between the image and sound field sensors. For example, the detection sensitivity to an intrusion event can be improved by combining the two sensors with an ‘OR’ decision rule, while false alarms can be reduced with an ‘AND’ decision rule. The proposed intrusion detection algorithm is summarized below:
if (visual sensor = true) && (sound field sensor = true)
    then intrusion event
else
    no intrusion event.
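The AND/OR fusion rules can be written as a single decision function (a hypothetical helper for illustration):

```python
def fuse_decisions(visual_alarm: bool, sound_field_alarm: bool, mode: str = "AND") -> bool:
    """'AND' reduces false alarms (both sensors must agree);
    'OR' increases sensitivity (either sensor suffices)."""
    if mode == "AND":
        return visual_alarm and sound_field_alarm
    return visual_alarm or sound_field_alarm
```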

6. Experimental Results

To configure the proposed system, the quality requirements of the microphone and speaker for the implementation of the sound field sensor are not high. A sensitivity of 5 mV/Pa and an S/N ratio of 58 dB in the frequency range of 500 Hz to 8 kHz are sufficient for the microphone, and a sound pressure level (SPL) of 96 dB @ 10 cm @ 1 W in the frequency range of 500 Hz to 6 kHz is sufficient for the speaker. Most commercially available microphones and speakers embedded in a CCTV or smartphone camera can be used for the implementation of a sound field sensor. A Texas Instruments (Dallas, TX, USA) TMS320C674x digital signal processing (DSP) chip or an Advanced RISC Machine (ARM) processor of similar performance is sufficient for the sound field sensing tasks of data acquisition and sound analysis, including the fast Fourier transform (FFT).
To demonstrate the performance, we tested the proposed intrusion detection system in a residential area as shown in Figure 10.
The test area shown in Figure 10 has an empty, sealed room with a light control function connected with the proposed intrusion detection system. We set the moment of the system initialization as the reference time (t = 0 s), and an intruder enters the room after two seconds (t = 2.0 s). The sound field sensor-based test results are summarized in Figure 11, Figure 12 and Figure 13.
During the test of the sound field sensor, we measured the sound pressure level of the multi-tone frequencies every 3.5 s, and the results are summarized in Figure 11a, Figure 12a and Figure 13a. S/N values of the sound pressure level corresponding to the estimated multi-tone frequencies are shown in Figure 11b, Figure 12b and Figure 13b.
The final detection results in terms of the SNR are shown in Figure 11c, Figure 12c and Figure 13c, where we set the detection threshold value to 1 because an intrusion can be declared if the signal S exceeds N over all the frequencies. As shown in Figure 11c, Figure 12c and Figure 13c, the system was initialized at t = 0, and the intruder entered the room at t = 2.0 s. Since the image and sound field sensors have different initialization times, we started the test events after all the components were working in the normal condition. The two-second delay is acceptable for a practical implementation. However, the system detects the intrusion at t = 3.5 s since it measures the sound pressure level every 3.5 s, as shown in Figure 11a, Figure 12a, Figure 13a, Figure 11b, Figure 12b and Figure 13b. The reliability of the sound field variation-based intrusion detection was tested using 20 trials with a similar intrusion scenario. After the system first detects the intruder, it turns on the light of the room for the image sensor to take over the tracking task, as shown in Figure 14.
As shown in Figure 11c, Figure 12c and Figure 13c, since the sound field sensor can detect intrusion 3.5 s after the initialization, the light is turned on after 3.5 s, as shown in Figure 14c. The image sensor-based detection was carried out using a fixed video camera without an auto white balance (AWB) function. The detection algorithm generates a single background and frame differences. In Figure 11c, Figure 12c and Figure 13c, the SNR value exceeds the detection threshold, which means that an intruder is present in the room. In Figure 14d, since the intruder crosses the pre-assigned line, the image sensor detects the intrusion event. As shown in Figure 11c, Figure 12c, Figure 13c and Figure 14d, the system declares an intrusion event when both the sound field and image sensors detect the intrusion. The proposed decision process is shown in Figure 15.
The methods of [5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,46,47] using an image sensor detect intruding objects in a general environment with normal illumination. Although many object detection systems adopt a low-light enhancement algorithm as preprocessing [2,3], such a system still cannot work properly if there is no illumination at all. Even if there is a very small amount of illumination, a general low-light enhancement algorithm usually amplifies noise as well as image intensity. However, the proposed system uses a sound field sensor to physically turn on the light in the no-illumination state. As a result, it is possible to detect intrusion using a simple image sensor-based detection algorithm without a complicated low-brightness enhancement algorithm. Although infrared sensors were used to detect intrusion under low-illumination conditions [16,48], the infrared sensor cannot detect an intrusion in a blind spot. To mitigate this blind spot problem, Refs. [49,50] employed passive infrared (PIR) sensors in all blind spots. However, this is too expensive to install in practical applications. An intrusion detection method using a thermal camera [51] was also proposed for low-light conditions. However, it is sensitive to temperature, and intrusion detection is not possible in blind spots. In contrast, the proposed algorithm can detect intrusion events within the bounded secured space by using the sound field sensor, even if an object intrudes behind a wall in low-light conditions, at t = 3.5 s as shown in Figure 15. In Refs. [33,34,52], an acoustic sensor can detect an abnormal situation even if there is no light, but it needs additional data to detect an abnormal sound. In addition, it is very sensitive to sound intensity and surrounding noise. In contrast, the proposed system can detect the intruding object using only sound signals without additional abnormal sound data.
An object detection system using optical fiber [53,54] can be used for accurate intrusion detection even in low-light conditions; however, it is not suitable for indoor intrusion detection systems. In contrast, the proposed system provides accurate intrusion detection in a low-illumination environment and in blind spots using an inexpensive, low-quality microphone, a speaker, and an image sensor.

7. Conclusions

The proposed hybrid sensor-based surveillance system can robustly detect an illegal intrusion by exploiting the advantages of both sound field and image sensors. Many security systems have a microphone and a speaker as well as an image sensor. For that reason, the sound field sensing algorithm can be embedded into a DSP module without significantly increasing the cost. Existing image sensor-based surveillance systems cannot avoid mis-detection or failure due to unstable illumination conditions such as low contrast, darkness, flickering, or noise.
To solve the problems of image sensor-based detection, the proposed system adopts a sound field sensor to detect an illegal intrusion even without sufficient illumination. The hybrid system first detects an intruding event using a sound field sensor by analyzing the multi-tone frequency spectrum of the sound pressure and the distortion in the transfer function of the sound field. Next, the system automatically turns on the light based on the detected event. The image sensor then takes over the detection task from the sound field sensor for seamless, accurate analysis and tracking of the intruding object. As a result, the proposed surveillance system has two major advantages: (i) it is power-efficient since we do not need to turn on the illuminating light before the initial detection; and (ii) there are no blind spots since the sound field sensor can detect an intruding object behind an obstacle. On the other hand, the source signal may make an unpleasant audible noise. However, this is not a serious problem because the detecting operation works in the absence of people. The proposed hybrid sensor network-based system can be used for low-power, robust surveillance for various security purposes such as an office room after work, an empty automobile, a safe room in a bank, and an armory. The proposed surveillance system can also provide additional functions such as transferring an alarm message to a smartphone.

Author Contributions

H.P., J.P., H.K. and K.-H.P. initiated the research and designed the experiment. K.-H.P. and S.Q.L. evaluated the performance of the proposed algorithm. J.P. wrote the paper.

Acknowledgments

This work was partly supported by Institute for Information and Communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (2017-0-00250, Intelligent Defense Boundary Surveillance Technology Using Collaborative Reinforced Learning of Embedded Edge Camera and Image Analysis), and by the R&D program of the Ministry of Trade, Industry and Energy (10047788, Development of Smart Video/Audio Surveillance SoC and Core Component for Onsite Decision Security System).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Castro, J.; Delgado, M.; Medina, J.; Ruiz-Lozano, M. Intelligent surveillance system with integration of heterogeneous information for intrusion detection. Expert Syst. Appl. 2011, 38, 11182–11192. [Google Scholar] [CrossRef]
  2. Ko, S.; Yu, S.; Kang, W.; Park, C.; Lee, S.; Paik, J. Artifact-free Low-light Video Enhancement Using Temporal Similarity and Guide Map. IEEE Trans. Ind. Electron. 2017, 64, 6392–6401. [Google Scholar] [CrossRef]
  3. Park, S.; Moon, B.; Ko, S.; Yu, S.; Paik, J. Low-light image restoration using bright channel prior-based variational Retinex model. EURASIP J. Image Video Process. 2017, 2017, 44. [Google Scholar] [CrossRef]
  4. Lee, S.Q.; Park, K.H.; Kim, K.; Ryu, H.; Wang, S. Practical Implementation of Intrusion Detection Method Based on the Sound Field Variation. In Proceedings of the 20th International Congress on Sound and Vibration, Bangkok, Thailand, 7–11 July 2013. [Google Scholar]
  5. Wang, J.-X. Research and implementation of intrusion detection algorithm in video surveillance. In Proceedings of the 2016 International Conference on Audio, Language and Image Processing (ICALIP), Shanghai, China, 11–12 July 2016; pp. 345–348. [Google Scholar] [CrossRef]
  6. Zhan, C.; Duan, X.; Xu, S.; Song, Z.; Luo, M. An Improved Moving Object Detection Algorithm Based on Frame Difference and Edge Detection. In Proceedings of the Fourth International Conference on Image and Graphics (ICIG 2007), Sichuan, China, 22–24 August 2007; pp. 519–523. [Google Scholar] [CrossRef]
  7. Yuan, X.; Sun, Z.; Varol, Y.; Bebis, G. A distributed visual surveillance system. In Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance, Miami, FL, USA, 22 July 2003; pp. 199–204. [Google Scholar] [CrossRef]
  8. Dastidar, J.G.; Biswas, R. Tracking Human Intrusion through a CCTV. In Proceedings of the 2015 International Conference on Computational Intelligence and Communication Networks (CICN), Jabalpur, India, 12–14 December 2015; pp. 461–465. [Google Scholar] [CrossRef]
  9. Zhang, Y.L.; Zhang, Z.Q.; Xiao, G.; Wang, R.D.; He, X. Perimeter intrusion detection based on intelligent video analysis. In Proceedings of the 2015 IEEE 15th International Conference on Control, Automation and Systems (ICCAS), Busan, Korea, 13–16 October 2015; pp. 1199–1204. [Google Scholar]
  10. Chen, J.H.; Tseng, T.H.; Lai, C.L.; Hsieh, S.T. An intelligent virtual fence security system for the detection of people invading. In Proceedings of the 2012 IEEE 9th International Conference on Ubiquitous Intelligence & Computing and 9th International Conference on Autonomic & Trusted Computing (UIC/ATC), Fukuoka, Japan, 4–7 September 2012; pp. 786–791. [Google Scholar]
  11. Hariyono, J.; Hoang, V.D.; Jo, K.H. Moving object localization using optical flow for pedestrian detection from a moving vehicle. Sci. World J. 2014, 2014, 196415. [Google Scholar] [CrossRef] [PubMed]
  12. Hossen, M.K.; Tuli, S.H. A surveillance system based on motion detection and motion estimation using optical flow. In Proceedings of the 2016 5th International Conference on Informatics, Electronics and Vision (ICIEV), Dhaka, Bangladesh, 13–14 May 2016; pp. 646–651. [Google Scholar] [CrossRef]
  13. Chauhan, A.K.; Krishan, P. Moving object tracking using gaussian mixture model and optical flow. Int. J. Adv. Res. Comput. Sci. Softw. Eng. 2013, 3, 243–246. [Google Scholar]
  14. Black, J.; Velastin, S.; Boghossian, B. A real time surveillance system for metropolitan railways. In Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance (AVSS 2005), Como, Italy, 15–16 September 2005; pp. 189–194. [Google Scholar]
  15. Steiger, O.; Weiss, S.; Felder, J. Real-time understanding of 3D video on an embedded system. In Proceedings of the 2009 IEEE 17th European Signal Processing Conference, Glasgow, UK, 24–28 August 2009; pp. 1518–1522. [Google Scholar]
  16. Liang, K.; Hon, H.; Khairunnisa, M.; Choong, T.; Khairil, H. Real time intrusion detection system for outdoor environment. In Proceedings of the 2012 IEEE Symposium on Computer Applications and Industrial Electronics (ISCAIE), Kota Kinabalu, Malaysia, 3–4 December 2012; pp. 147–152. [Google Scholar]
  17. Chen, C.H.; Chen, T.Y.; Lin, Y.C.; Hu, W.C. Moving-Object Intrusion Detection Based on Retinex-Enhanced Method. In Proceedings of the 2014 Tenth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Kitakyushu, Japan, 27–29 August 2014; pp. 281–284. [Google Scholar] [CrossRef]
  18. Ketcham, M.; Ganokratanaa, T.; Srinhichaarnun, S. The intruder detection system for rapid transit using CCTV surveillance based on histogram shapes. In Proceedings of the 2014 11th International Joint Conference on Computer Science and Software Engineering (JCSSE), Chon Buri, Thailand, 14–16 May 2014; pp. 1–6. [Google Scholar] [CrossRef]
  19. Matern, D.; Condurache, A.P.; Mertins, A. Automated Intrusion Detection for Video Surveillance Using Conditional Random Fields. In Proceedings of the MVA2013 IAPR International Conference on Machine Vision Applications, Kyoto, Japan, 20–23 May 2013; pp. 298–301. [Google Scholar]
  20. Chowdhry, D.; Paranjape, R.; Laforge, P. Smart home automation system for intrusion detection. In Proceedings of the 2015 IEEE 14th Canadian Workshop on Information Theory (CWIT), St. John’s, NL, Canada, 6–9 July 2015; pp. 75–78. [Google Scholar]
  21. Zieger, C.; Brutti, A.; Svaizer, P. Acoustic based surveillance system for intrusion detection. In Proceedings of the 2009 Sixth IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS’09), Genova, Italy, 2–4 September 2009; pp. 314–319. [Google Scholar]
  22. Zhang, J.; Song, G.; Qiao, G.; Meng, T.; Sun, H. An indoor security system with a jumping robot as the surveillance terminal. IEEE Trans. Consum. Electron. 2011, 57. [Google Scholar] [CrossRef]
  23. Barry, A.S.; Czechanski, J. Ground surveillance radar for perimeter intrusion detection. In Proceedings of the 2000 19th IEEE Digital Avionics Systems Conference (DASC), Philadelphia, PA, USA, 7–13 October 2000. [Google Scholar]
  24. Butler, W.; Poitevin, P.; Bjornholt, J. Benefits of wide area intrusion detection systems using FMCW radar. In Proceedings of the 2007 41st Annual IEEE International Carnahan Conference on Security Technology, Ottawa, ON, Canada, 8–11 October 2007; pp. 176–182. [Google Scholar]
  25. Butler, W. Design considerations for intrusion detection wide area surveillance radars for perimeters and borders. In Proceedings of the 2008 IEEE Conference on Technologies for Homeland Security, Waltham, MA, USA, 12–13 May 2008; pp. 47–50. [Google Scholar]
  26. Bahl, P.; Padmanabhan, V.N. RADAR: An in-building RF-based user location and tracking system. In Proceedings of the Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies (INFOCOM 2000), Tel Aviv, Israel, 26–30 March 2000; Volume 2, pp. 775–784. [Google Scholar]
  27. Want, R.; Hopper, A.; Falcao, V.; Gibbons, J. The active badge location system. ACM Trans. Inf. Syst. (TOIS) 1992, 10, 91–102. [Google Scholar] [CrossRef]
  28. Liu, L.; Zhang, W.; Deng, C.; Yin, S.; Wei, S. BriGuard: A lightweight indoor intrusion detection system based on infrared light spot displacement. IET Sci. Meas. Technol. 2014, 9, 306–314. [Google Scholar] [CrossRef]
  29. Youssef, M.; Mah, M.; Agrawala, A. Challenges: Device-free passive localization for wireless environments. In Proceedings of the 13th Annual ACM International Conference on Mobile Computing and Networking, Montréal, QC, Canada, 9–14 September 2007; pp. 222–229. [Google Scholar]
  30. Kosba, A.E.; Abdelkader, A.; Youssef, M. Analysis of a device-free passive tracking system in typical wireless environments. In Proceedings of the 2009 IEEE 3rd International Conference on New Technologies, Mobility and Security (NTMS), Cairo, Egypt, 20–23 December 2009; pp. 1–5. [Google Scholar]
  31. Kosba, A.E.; Saeed, A.; Youssef, M. Robust WLAN device-free passive motion detection. In Proceedings of the 2012 IEEE Wireless Communications and Networking Conference (WCNC), Lugano, Switzerland, 19–23 March 2012; pp. 3284–3289. [Google Scholar]
  32. Seifeldin, M.; Saeed, A.; Kosba, A.E.; El-Keyi, A.; Youssef, M. Nuzzer: A large-scale device-free passive localization system for wireless environments. IEEE Trans. Mob. Comput. 2013, 12, 1321–1334. [Google Scholar] [CrossRef]
  33. Andersson, M.; Ntalampiras, S.; Ganchev, T.; Rydell, J.; Ahlberg, J.; Fakotakis, N. Fusion of acoustic and optical sensor data for automatic fight detection in urban environments. In Proceedings of the 2010 IEEE 13th Conference on Information Fusion (FUSION), Edinburgh, UK, 26–29 July 2010; pp. 1–8. [Google Scholar]
  34. Azzam, R. Visual/Acoustic Detection and Localisation in Embedded Systems. 2016. Available online: http://dspace.lib.cranfield.ac.uk/handle/1826/10672 (accessed on 22 May 2018).
  35. Mahalanobis, A.; Kumar, B.V.; Song, S.; Sims, S.; Epperson, J. Unconstrained correlation filters. Appl. Opt. 1994, 33, 3751–3759. [Google Scholar] [CrossRef] [PubMed]
  36. Bolme, D.S.; Draper, B.A.; Beveridge, J.R. Average of synthetic exact filters. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009; pp. 2105–2112. [Google Scholar]
  37. Bolme, D.S.; Beveridge, J.R.; Draper, B.A.; Lui, Y.M. Visual object tracking using adaptive correlation filters. In Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010; pp. 2544–2550. [Google Scholar]
  38. Henriques, J.F.; Caseiro, R.; Martins, P.; Batista, J. Exploiting the circulant structure of tracking-by-detection with kernels. In European Conference on Computer Vision; Springer: Heidelberg, Germany, 2012; pp. 702–715. [Google Scholar]
  39. Henriques, J.F.; Caseiro, R.; Martins, P.; Batista, J. High-speed tracking with kernelized correlation filters. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 583–596. [Google Scholar] [CrossRef] [PubMed]
  40. Lucas, B.D.; Kanade, T. An iterative image registration technique with an application to stereo vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence (IJCAI’81), Vancouver, BC, Canada, 24–28 August 1981; Volume 2. [Google Scholar]
  41. Hwangbo, M.; Kim, J.S.; Kanade, T. Inertial-aided KLT feature tracking for a moving camera. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), St. Louis, MO, USA, 10–15 October 2009; pp. 1909–1916. [Google Scholar]
  42. Comaniciu, D.; Ramesh, V.; Meer, P. Real-time tracking of non-rigid objects using mean shift. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Hilton Head Island, SC, USA, 15 June 2000; Volume 2, pp. 142–149. [Google Scholar]
  43. Faragher, R. Understanding the Basis of the Kalman Filter Via a Simple and Intuitive Derivation [Lecture Notes]. IEEE Signal Process. Mag. 2012, 29, 128–132. [Google Scholar] [CrossRef]
  44. Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002, 50, 174–188. [Google Scholar] [CrossRef]
  45. Park, K.H.; Lee, S.Q. Early stage fire sensing based on audible sound pressure spectra with multi-tone frequencies. Sens. Actuators A Phys. 2016, 247, 418–429. [Google Scholar] [CrossRef]
  46. Pham, C. Low cost Wireless Image Sensor Networks for visual surveillance and intrusion detection applications. In Proceedings of the 2015 IEEE 12th International Conference on Networking, Sensing and Control (ICNSC), Taipei, Taiwan, 9–11 April 2015; pp. 376–381. [Google Scholar]
  47. Pandey, P.; Laxmi, V. Design of low cost and power efficient Wireless vision Sensor for surveillance and monitoring. In Proceedings of the 2016 International Conference on Computation of Power, Energy Information and Communication (ICCPEIC), Chennai, India, 20–21 April 2016; pp. 113–117. [Google Scholar]
  48. Raheja, J.L.; Deora, S.; Chaudhary, A. Cross border intruder detection in hilly terrain in dark environment. Optik Int. J. Light Electron Opt. 2016, 127, 535–538. [Google Scholar] [CrossRef]
  49. Prati, A.; Vezzani, R.; Benini, L.; Farella, E.; Zappi, P. An integrated multi-modal sensor network for video surveillance. In Proceedings of the Third ACM International Workshop on Video Surveillance & Sensor Networks, Hilton, Singapore, 11 November 2005; pp. 95–102. [Google Scholar]
  50. Song, B.; Choi, H.; Lee, H.S. Surveillance tracking system using passive infrared motion sensors in wireless sensor network. In Proceedings of the IEEE International Conference on Information Networking (ICOIN 2008), Busan, Korea, 23–25 January 2008; pp. 1–5. [Google Scholar]
  51. Fitzmaurice, M. The Use of Thermal Imagery for Intrusion Detection in a Public Transit Environment. In Proceedings of the 2015 Joint Rail Conference, San Jose, CA, USA, 23–26 March 2015; p. V001T06A009. [Google Scholar]
  52. Lefter, I.; Burghouts, G.J.; Rothkrantz, L.J. Learning the fusion of audio and video aggression assessment by meta-information from human annotations. In Proceedings of the 2012 IEEE 15th International Conference on Information Fusion (FUSION), Singapore, 9–12 July 2012; pp. 1527–1533. [Google Scholar]
  53. Lu, Y.; Zhu, T.; Chen, L.; Bao, X. Distributed vibration sensor based on coherent detection of phase-OTDR. J. Lightwave Technol. 2010, 28, 3243–3249. [Google Scholar]
  54. Owen, A.; Duckworth, G.; Worsley, J. OptaSense: Fibre optic distributed acoustic sensing for border monitoring. In Proceedings of the 2012 European Intelligence and Security Informatics Conference (EISIC), Odense, Denmark, 22–24 August 2012; pp. 362–364. [Google Scholar]
Figure 1. Proposed hybrid sensor-based intrusion detection system: (a) in the dark, sealed room, the microphone–speaker set in the lower left corner periodically generates a stationary wave, and the sound field sensor in the upper right corner detects whether the door has been opened; (b) after the sound field sensor first detects the intrusion, the system automatically turns on the light, and the image sensor in the upper left corner then starts tracking the intruding object.
Figure 2. Detection of object moving trajectory crossing a pre-specified line.
Figure 3. Result of intrusion detection using ROI-based vector crossing: (a) before intrusion; (b) inbound crossing; and (c) outbound crossing.
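The ROI-based vector-crossing test illustrated in Figures 2 and 3 reduces to a sign test on cross products: a crossing occurs when consecutive trajectory points fall on opposite sides of the pre-specified line. The sketch below is a minimal illustration, not the authors' implementation; the function names and the convention that the positive half-plane corresponds to "outside" are assumptions.

```python
def side(a, b, p):
    """Sign of the cross product (b - a) x (p - a): which side of line a->b point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crossing(line_a, line_b, prev_pos, curr_pos):
    """Classify one trajectory step as 'inbound', 'outbound', or no crossing (None)."""
    s_prev = side(line_a, line_b, prev_pos)
    s_curr = side(line_a, line_b, curr_pos)
    if s_prev > 0 and s_curr <= 0:
        return "inbound"   # moved from the positive to the non-positive half-plane
    if s_prev < 0 and s_curr >= 0:
        return "outbound"  # moved from the negative to the non-negative half-plane
    return None            # stayed on the same side; no crossing event
```

Applying the test per tracked frame pair yields the inbound/outbound events shown in Figure 3b,c.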
Figure 4. Sound source and boundary conditions for the acoustic space.
Figure 5. Estimated distortions due to the intrusion: (a) before and (b) after intrusion.
Figure 6. A sound source containing multiple tones: (a) frequency domain and (b) time domain.
Figure 7. Signal-to-noise ratio (SNR).
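The SNR plotted in Figure 7 compares the received power at the emitted multi-tone frequencies against the surrounding noise floor. The following is a rough sketch of such an estimate, not the paper's implementation; the window choice, tone set, and per-bin noise-floor estimate are assumptions.

```python
import numpy as np

def tone_snr_db(signal, fs, tone_freqs):
    """Estimate SNR (dB) of multi-tone components in a received audio frame."""
    # One-sided power spectrum of the Hann-windowed frame.
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal)))) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    # Locate the FFT bin nearest each emitted tone frequency.
    tone_bins = [int(np.argmin(np.abs(freqs - f))) for f in tone_freqs]
    signal_power = sum(spectrum[b] for b in tone_bins)
    # Noise floor: mean power of all remaining bins, scaled to the tone count.
    noise_mask = np.ones(len(spectrum), dtype=bool)
    noise_mask[tone_bins] = False
    noise_power = spectrum[noise_mask].mean() * len(tone_bins)
    return 10.0 * np.log10(signal_power / noise_power)
```

A drop in this value between consecutive observation frames would indicate that the stationary sound field has been disturbed, e.g., by an opened door.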
Figure 8. Block diagram of the proposed hybrid sensor-based intrusion detection system.
Figure 9. The multi-tone signal with 15% duty cycle.
Figure 10. A residential area for the test.
Figure 11. Test 1 of intrusion detection using the sound field sensor: (a) sound spectra versus various multi-tone frequencies; (b) SNR values versus multi-tone frequencies; and (c) SNR values versus observation time.
Figure 12. Test 2 of intrusion detection using the sound field sensor: (a) sound spectra versus various multi-tone frequencies; (b) SNR values versus multi-tone frequencies; and (c) SNR values versus observation time.
Figure 13. Test 3 of intrusion detection using the sound field sensor: (a) sound spectra versus various multi-tone frequencies; (b) SNR values versus multi-tone frequencies; and (c) SNR values versus observation time.
Figure 14. Test result of image sensor-based detection after the sound field sensor first detects the intruder: (a) 1 s; (b) 2 s; (c) 3.5 s; (d) 7 s; (e) 10.5 s; and (f) 14 s.
Figure 15. Decision process in the timeline.