Article

Detection of Abnormal Events via Optical Flow Feature Analysis

Tian Wang and Hichem Snoussi
1 School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
2 Institut Charles Delaunay-LM2S-UMR STMR 6279 CNRS, University of Technology of Troyes, Troyes 10004, France
* Author to whom correspondence should be addressed.
Sensors 2015, 15(4), 7156-7171; https://doi.org/10.3390/s150407156
Submission received: 17 January 2015 / Accepted: 16 March 2015 / Published: 24 March 2015
(This article belongs to the Section Physical Sensors)

Abstract

In this paper, a novel algorithm is proposed to detect abnormal events in video streams. The algorithm is based on the histogram of the optical flow orientation descriptor and a classification method. The descriptor is presented in detail for describing the movement information of the global video frame or of the foreground frame. By combining the one-class support vector machine and kernel principal component analysis methods, the abnormal events in the current frame can be detected after a learning period characterizing normal behaviors. The different abnormal detection results are analyzed and explained. The proposed detection method is tested on benchmark datasets, and the experimental results show the effectiveness of the algorithm.

1. Introduction

With the development of human society, the security challenges in public scenes have gradually increased. In the last several decades, the cost of cameras and network communication has been significantly reduced, and video camera sensors are now widely used in many areas of human life. However, the traditional way of visual surveillance is labor-intensive, requires non-stop human attention and has low efficiency. Thus, tackling visual surveillance problems automatically by adopting video processing techniques plays a paramount role in the computer vision research area. The scientific challenges in this area include developing strategies to ensure public safety and detecting the abnormal behavior of an individual or a group.

Methods modeling behavior by adopting Bayesian networks were introduced in [1–4]. In [5], delta-dual hierarchical Dirichlet processes (dDHDP) were used to detect abnormal activity patterns in the field of visual features. By analyzing the statistical properties, abnormal events were detected. Successful results were obtained on several scenes, but the prediction was based on a complicated probability model.

Some researchers took notice of spatio-temporal features. In [6], the movement was represented by a co-occurrence matrix and modeled by a Markov random field model. Abnormal activities, which were the significant changes in the scene, were detected. The work was similar to the foreground subtraction method in a non-stable background scene.

Low-level motion features have also gained attention for detecting abnormal events. In [7], an algorithm was proposed to detect the actions of a single individual, such as hand-waving, boxing, etc. In [8], bionics technology was applied to model the superior colliculus (SC) to discover abnormalities in the panoramic image. These methods were based on partial information, such as that contained in small observation windows of the image. In other words, they did not employ global information within the frame.

Based on feature representation and pattern classification, an abnormal detection method is proposed in this paper. The datasets used in our work are the Performance Evaluation of Tracking and Surveillance (PETS2009) [9] and the University of Minnesota (UMN) [10] datasets, as shown in Figure 1. A normal scene means that the individuals are walking in different directions. In the abnormal scenes of the PETS dataset, people are moving (walking or running) in the same direction, while in the UMN abnormal scenes, the individuals are running. The proposed algorithm is composed of two parts. Firstly, the visual features are extracted without object tracking. Secondly, abnormal events are detected by classifying the extracted features. Specifically, the one-class support vector machine (SVM) and kernel principal component analysis (KPCA) are used in this paper. By learning the normal behaviors, the classifiers detect the abnormal ones. The rest of the paper is organized as follows. In Section 2, the optical flow-based feature is proposed. In Section 3, the one-class SVM classification method and the kernel PCA novelty detection method are presented, and the corresponding abnormal detection framework is described. In Section 4, the experimental results and the analysis are given. Finally, the paper is concluded with future works in Section 5.

2. Feature Selection for Abnormal Detection

Because optical flow can represent the movement information of actions, we choose the Horn-Schunck (HS) [11] method to compute it. The HS method formulates the optical flow as a global energy functional for the gray image sequence:

$$E = \iint \left[ (I_x u + I_y v + I_t)^2 + \alpha \left( \|\nabla u\|^2 + \|\nabla v\|^2 \right) \right] \, dx \, dy$$
where Ix, Iy and It are the derivatives of the image intensity values along the horizontal direction x, the vertical direction y and the time dimension t, respectively, u and v are the horizontal and vertical components of the optical flow, and α is a regularization constant.
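As an illustration of how the flow field can be obtained in practice, the following is a minimal NumPy/SciPy sketch of the classical Horn-Schunck iteration; the derivative kernels, the value of α and the number of iterations are illustrative choices, not the settings used in this paper.

```python
import numpy as np
from scipy.signal import convolve2d

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck optical flow between two gray-scale frames."""
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)

    # Spatial and temporal derivatives (simple finite-difference kernels).
    kx = np.array([[-1.0, 1.0], [-1.0, 1.0]]) * 0.25
    ky = np.array([[-1.0, -1.0], [1.0, 1.0]]) * 0.25
    kt = np.full((2, 2), 0.25)
    Ix = convolve2d(im1, kx, mode='same') + convolve2d(im2, kx, mode='same')
    Iy = convolve2d(im1, ky, mode='same') + convolve2d(im2, ky, mode='same')
    It = convolve2d(im2, kt, mode='same') - convolve2d(im1, kt, mode='same')

    # Kernel for the local average of the flow field.
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]], dtype=np.float64) / 12.0

    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_bar = convolve2d(u, avg, mode='same')
        v_bar = convolve2d(v, avg, mode='same')
        # Update derived from the Euler-Lagrange equations of the HS energy.
        common = (Ix * u_bar + Iy * v_bar + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_bar - Ix * common
        v = v_bar - Iy * common
    return u, v
```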

In [12], abnormal global frame detection was proposed, and a frame covariance matrix descriptor was constructed based on the optical flow. In this paper, we analyze the details of the histogram of the optical flow orientation (HOFO) with different parameters. The optical flow orientation features of an image are extracted at a fixed resolution and then gathered into a high-dimensional feature vector. A 2 × 2 rectangular cell HOFO descriptor of the original image or the foreground image is shown in Figure 2. The orientation is computed from the horizontal and vertical optical flow components by a trigonometric function and voted into n bins over 0°–360° (denoted as the signed angle) or 0°–180° (denoted as the unsigned angle); nine bins are chosen in this paper. The optical flow magnitude of a pixel is used as a weight coefficient in the voting process. A block contains hb × wb cells, set to 2 × 2 in this paper to capture the spatial information of the HOFO, so the HOFO dimension of one block is 36 (9 × 2 × 2). The HOFO feature describes the global movement information of one frame (or foreground frame) by gathering the histograms of the optical flow orientation over the sub-frames (blocks). Because the movement in an abnormal frame usually has larger optical flow magnitudes and more directions, the elements of the HOFO vector of an abnormal frame are generally larger than those of a normal one. Four normalization schemes are considered when the HOFO is calculated:

$$\text{L1-norm}: \quad v \leftarrow \frac{v}{\|v\|_1 + \varepsilon}$$
$$\text{L1-sqrt}: \quad v \leftarrow \sqrt{\frac{v}{\|v\|_1 + \varepsilon}}$$
$$\text{L2-norm}: \quad v \leftarrow \frac{v}{\sqrt{\|v\|_2^2 + \varepsilon^2}}$$
L2-Hys: the L2-norm followed by clipping the components of the normalized vector at 0.3 (L2hys-0.3) or 0.4 (L2hys-0.4).
where v is the HOFO descriptor vector before normalization and ε is a small constant that keeps the denominators well defined.
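For concreteness, a minimal NumPy sketch of these four normalization schemes is given below; the value of ε and the renormalization after clipping in L2-Hys (in the style of the standard HOG L2-Hys scheme) are assumptions.

```python
import numpy as np

EPS = 1e-6  # the small constant epsilon that keeps the denominators well defined

def l1_norm(v):
    return v / (np.abs(v).sum() + EPS)

def l1_sqrt(v):
    return np.sqrt(v / (np.abs(v).sum() + EPS))

def l2_norm(v):
    return v / np.sqrt((v ** 2).sum() + EPS ** 2)

def l2_hys(v, clip=0.4):
    # L2-norm, clip the components at 0.3 or 0.4 (L2hys-0.3 / L2hys-0.4),
    # then renormalize (the renormalization step is an assumption here).
    v = np.minimum(l2_norm(v), clip)
    return l2_norm(v)
```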

3. Abnormal Detection Method Based on Optical Flow Analysis

The objective of the abnormal event detection problem is to find the samples that are different from the training ones. Thus, two classification methods, the one-class support vector machine (one-class SVM) and kernel principal component analysis (KPCA) for novelty detection, suit this application. In this section, we firstly introduce these two methods and then propose the abnormal detection algorithm in video sequences.

3.1. One-Class Support Vector Machine

Vapnik and Lerner initially proposed the support vector machine for classification or regression based on statistical learning theory [13]. Later, by adopting kernel methods, the support vector machine was extended to deal with non-linear problems [14–16]. The non-linear one-class support vector machine is a development of the basic SVM theory that finds an appropriate region containing most of the data drawn from an unknown probability distribution. The problem of the non-linear one-class support vector machine can be presented as [17,18]:

$$\min_{w, \xi, \rho} \ \frac{1}{2}\|w\|^2 + \frac{1}{\nu n}\sum_{i=1}^{n}\xi_i - \rho \quad \text{subject to} \quad \langle w, \Phi(x_i)\rangle \geq \rho - \xi_i, \quad \xi_i \geq 0$$
where xi ∈ χ, i ∈ [1 … n] are the n training samples in the original data space χ, ξi is the slack variable penalizing the outliers, and the hyperparameter ν ∈ (0,1] weights the slack variables. Φ is a map from the non-empty set of the original input data χ to a feature space ℋ. For computing dot products in ℋ, the kernel function is defined as κ(x,x′) = 〈Φ(x) · Φ(x′)〉. The decision function is defined as:
$$f(x) = \operatorname{sgn}\left( \sum_{i=1}^{n} \alpha_i \kappa(x_i, x) - \rho \right)$$
where x is a vector in the input data space χ and κ is the kernel function. The Gaussian kernel is used to deal with the non-linear problem in this paper.
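As a usage sketch (not the authors' implementation, which relies on a MATLAB toolbox [18]), the one-class SVM with a Gaussian kernel can be applied to HOFO descriptors with scikit-learn; the descriptors below are random placeholders, and the values of ν and the kernel width are illustrative.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Placeholder HOFO descriptors: one 36-dimensional row per frame.
rng = np.random.default_rng(0)
H_train = rng.random((130, 36))   # descriptors of normal training frames
H_test = rng.random((20, 36))     # descriptors of incoming frames

# nu plays the role of the hyperparameter nu above; gamma sets the Gaussian kernel width.
clf = OneClassSVM(kernel='rbf', nu=0.1, gamma='scale').fit(H_train)

labels = clf.predict(H_test)            # +1 for normal, -1 for abnormal
scores = clf.decision_function(H_test)  # value inside the sgn(.) of the decision function
```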

3.2. Kernel Principal Component Analysis

Kernel principal component analysis [19,20] extends the standard PCA to non-linear data distributions. Before performing PCA, the n data points xi ∈ ℝd are mapped to a higher-dimensional feature space ℱ where the standard PCA is performed:

$$x_i \mapsto \Phi(x_i)$$
In kernel PCA, an eigenvector V of the covariance matrix in ℱ is a linear combination of the centered maps Φ̃(xi):
$$V = \sum_{i=1}^{n} \alpha_i \tilde{\Phi}(x_i)$$
$$\tilde{\Phi}(x_i) = \Phi(x_i) - \frac{1}{n}\sum_{r=1}^{n}\Phi(x_r)$$
where αi is the i-th component of a vector α. This vector is an eigenvector of the centered kernel matrix $\tilde{K}(x_i, x_j) = \langle \tilde{\Phi}(x_i) \cdot \tilde{\Phi}(x_j) \rangle$.

For novelty detection [21], the reconstruction error p(x) can be defined as:

$$p(x) = p_S(x) - \sum_{l=1}^{q} f_l(x)^2$$
subject to:
$$p_S(x) = \left\| \Phi(x) - \frac{1}{n}\sum_{r=1}^{n} \Phi(x_r) \right\|^2$$
$$f_l(x) = \tilde{\Phi}(x) \cdot V_l = \sum_{i=1}^{n} \alpha_i^l \left[ k(x, x_i) - \frac{1}{n}\sum_{r=1}^{n} k(x_i, x_r) - \frac{1}{n}\sum_{r=1}^{n} k(x, x_r) + \frac{1}{n^2}\sum_{r,s=1}^{n} k(x_r, x_s) \right]$$
where fl(x) is the projection of Φ̃(x) onto the eigenvector Vl, the index l denotes the l-th eigenvector (with l = 1 for the eigenvector with the largest eigenvalue), and q is the number of retained principal components.
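The reconstruction error above can be evaluated directly from the kernel matrix. Below is a minimal NumPy sketch following Hoffmann's formulation [21]; the Gaussian kernel width σ and the number of retained components q are illustrative choices.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def kpca_reconstruction_error(X_train, X_test, q=5, sigma=1.0):
    """Reconstruction error p(x) of each test sample (KPCA novelty detection)."""
    n = X_train.shape[0]
    K = gaussian_kernel(X_train, X_train, sigma)               # k(x_r, x_s)
    one_n = np.full((n, n), 1.0 / n)
    K_tilde = K - one_n @ K - K @ one_n + one_n @ K @ one_n    # centered kernel matrix

    # Eigenvectors alpha^l of the centered kernel matrix, normalized so that
    # lambda_l * ||alpha^l||^2 = 1 (unit-norm eigenvectors V_l in feature space).
    lam, A = np.linalg.eigh(K_tilde)
    order = np.argsort(lam)[::-1][:q]
    lam, A = lam[order], A[:, order]
    A = A / np.sqrt(lam)

    k_x = gaussian_kernel(X_test, X_train, sigma)              # k(x, x_i), shape (m, n)
    k_xx = np.ones(X_test.shape[0])                            # k(x, x) = 1 for the Gaussian kernel
    row_mean = K.mean(axis=1)                                  # (1/n) sum_r k(x_i, x_r)
    all_mean = K.mean()                                        # (1/n^2) sum_{r,s} k(x_r, x_s)

    # Spherical potential p_S(x) = ||Phi(x) - mean of Phi(x_r)||^2.
    p_s = k_xx - 2.0 * k_x.mean(axis=1) + all_mean

    # Projections f_l(x) of the centered map onto the eigenvectors V_l.
    centered = k_x - row_mean[None, :] - k_x.mean(axis=1, keepdims=True) + all_mean
    f = centered @ A                                           # shape (m, q)

    return p_s - (f ** 2).sum(axis=1)                          # reconstruction error p(x)
```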

3.3. Abnormal Detection Algorithm Based on Optical Flow Feature Classification

By adopting the histogram of the optical flow orientation feature descriptor and these two novelty detection methods, the abnormal event detection method in video streams is summarized in Algorithm 1. [I1, …, Im] is the set of normal training frames, in which the individuals are walking in all directions. The abnormal samples are the frames in which the individuals are moving in one direction or running. This definition of an abnormal event indicates that the individuals are attracted by some particular event or escaping from a dangerous zone.


Algorithm 1 Abnormal detection algorithm.

Require: Image I.
1: Compute the optical flow of the training frames $[I_1, \ldots, I_m]$ via the HS method:
   $$[I_1, I_2, \ldots, I_m] \rightarrow [O_1, O_2, \ldots, O_m]$$
2: Compute the histogram of the optical flow orientation of the original image or foreground image:
   $$[O_1, O_2, \ldots, O_m] \rightarrow [H_1, \ldots, H_m]$$
3: (1) SVM method: the training data are learned by the one-class SVM method to obtain the support vectors:
   $$[H_1, \ldots, H_m] \xrightarrow{\text{SVM support vectors}} [S_1, \ldots, S_{m_1}]$$
   (2) PCA method: compute the principal components by the KPCA method and measure the squared distance:
   $$[H_1, \ldots, H_m] \xrightarrow{\text{PCA principal components}} [P_1, \ldots, P_{m_2}]$$
4: (1) SVM method: each incoming frame feature $H_{n, \ldots, q}$ is classified by the one-class SVM; the abnormal or normal event is detected in the current image.
   (2) PCA method: each incoming frame feature $H_{n, \ldots, q}$ is classified by KPCA.
5: The detection results are filtered by the state transition restriction.

Step 1

Compute the optical flow of each frame via the Horn–Schunck (HS) optical flow method on the gray-scale images.

Step 2

Calculate the histogram of the optical flow orientation (HOFO) of each frame. The sketch for choosing the HOFO feature in the original image or in the foreground image is shown in Figure 2. If the HOFO descriptor is computed on the foreground image, the optical flow in the background is zero. Thus, the background area is not considered, and computing time is saved.
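To make the descriptor concrete, the following sketch computes a single-block HOFO (2 × 2 cells, nine bins, magnitude-weighted voting) from the flow fields u and v; the tiling of the frame into blocks, the foreground masking and the normalization of Section 2 are omitted, and hard binning is used for simplicity.

```python
import numpy as np

def hofo_block(u, v, n_bins=9, signed=True):
    """HOFO descriptor of one block (2 x 2 cells, n_bins orientation bins per cell)."""
    span = 360.0 if signed else 180.0
    mag = np.sqrt(u ** 2 + v ** 2)                  # optical flow magnitude (voting weight)
    ang = np.degrees(np.arctan2(v, u)) % span       # orientation from the flow components

    h, w = u.shape
    hists = []
    for i in range(2):                              # 2 x 2 cells per block
        for j in range(2):
            rows = slice(i * h // 2, (i + 1) * h // 2)
            cols = slice(j * w // 2, (j + 1) * w // 2)
            hist, _ = np.histogram(ang[rows, cols], bins=n_bins,
                                   range=(0.0, span), weights=mag[rows, cols])
            hists.append(hist)
    return np.concatenate(hists)                    # 9 x 2 x 2 = 36-dimensional block descriptor
```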

Step 3

The one-class support vector machine or kernel principal component analysis method is used to classify feature samples of the incoming video frames. The flowchart of our method is shown in Figure 3.

SVM method

The training feature samples are extracted from the normal images, which include HOFO in the original images or in the foreground images. The HOFO feature of the k-th frame is labeled as Hk. The training samples H1,…,m, m > 1 are gathered, and then, the support vectors are obtained in the SVM training step. Based on the support vectors, the incoming feature samples Hn,…,q are classified.

PCA method

The normal training feature samples for KPCA are mapped into a high-dimensional feature space. In this space, PCA extracts the principal components of the data distribution. Then, the squared distance of each testing sample to the corresponding principal subspace is measured for novelty detection [21].
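Putting the pieces together, the sketch below chains the hypothetical helpers from the earlier sketches (horn_schunck, hofo_block, kpca_reconstruction_error) into the training/testing flow of Step 3 for the KPCA route; the placeholder frames, the threshold rule and all parameter values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def frame_descriptor(prev_frame, frame):
    """HOFO of one frame: optical flow followed by a block-wise histogram.
    For brevity, a single block covering the whole frame is used here."""
    u, v = horn_schunck(prev_frame, frame)
    return hofo_block(u, v)

# Placeholder gray-scale frames standing in for a normal training clip and a test clip.
rng = np.random.default_rng(0)
frames = [rng.random((60, 80)) for _ in range(11)]
test_frames = [rng.random((60, 80)) for _ in range(6)]

H_train = np.stack([frame_descriptor(a, b) for a, b in zip(frames[:-1], frames[1:])])
H_test = np.stack([frame_descriptor(a, b) for a, b in zip(test_frames[:-1], test_frames[1:])])

# KPCA route: a large reconstruction error marks the frame as abnormal.
train_err = kpca_reconstruction_error(H_train, H_train, q=5, sigma=1.0)
threshold = np.percentile(train_err, 95)          # illustrative threshold choice
test_err = kpca_reconstruction_error(H_train, H_test, q=5, sigma=1.0)
pred = np.where(test_err > threshold, -1, 1)      # -1 abnormal, +1 normal
```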

Step 4

When a normal event or an abnormal event is observed, the video clip holds one state over several consecutive frames. Thus, we use a state transition restriction method that presets a threshold N to filter out short fluctuation clips. If the number of consecutive frames predicted as abnormal after a normal video clip is larger than N, the state of the abnormal detection system is changed from “normal” to “abnormal”.
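A minimal sketch of this state transition restriction is given below: the state flips only after N consecutive frames of the opposite prediction. Treating the abnormal-to-normal transition symmetrically is an assumption.

```python
def filter_predictions(preds, N):
    """Smooth per-frame predictions (+1 normal, -1 abnormal) with the state
    transition restriction: the state changes only after N consecutive frames
    of the opposite prediction."""
    state = 1           # assume the sequence starts in the normal state
    run = 0             # length of the current run of opposite predictions
    filtered = []
    for p in preds:
        if p != state:
            run += 1
            if run >= N:
                state = p
                run = 0
        else:
            run = 0
        filtered.append(state)
    return filtered

# Example: a short abnormal burst of two frames is ignored when N = 3.
print(filter_predictions([1, 1, -1, -1, 1, -1, -1, -1, -1], N=3))
```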

4. Experimental Results and Analyses

This section presents the experimental results and the analyses of the proposed abnormal detection method. The datasets PETS [9] and UMN [10] are used.

4.1. PETS Dataset

The detection accuracies on the PETS dataset under different features and classification methods are shown in Figure 4. The HOFO features are obtained under different conditions: original image or foreground image, signed or unsigned angle, and L1-norm, L1-sqrt, L2-norm, L2hys-0.3, L2hys-0.4 or no normalization. The KPCA novelty detection method obtains lower accuracy than the one-class SVM. The best accuracy of the KPCA results is 89.5%, under the condition of the original image, signed angle and L1-norm normalization. The best accuracy of the one-class SVM is 96.8%, under the condition of the original image, signed angle and L2hys-0.4 normalization. Examples of this high-dimensional feature space are illustrated in Figure 5 using the projection onto the three largest principal components. The training normal data (labeled with a blue cross) are confused with the testing normal data (labeled with a cyan diamond) and the testing abnormal data (labeled with a red rectangle). In short, the training data are mixed with the abnormal testing data. The one-class SVM has a slack variable that tunes the number of acceptable outliers in the training data; this soft-margin strategy allows the one-class SVM to obtain higher accuracy.

The results adjusted by restriction of the state transition are shown in Figure 6. As shown in the figure, the fluctuations between the “abnormal” and “normal” state are reduced. The detection results of the PETS scene are shown in Figure 7.

4.2. UMN Dataset

The HOFO descriptor can represent not only the information of optical flow orientation, but also the optical flow magnitude. The results of the benchmark dataset UMN are shown in Figure 8. The HOFO descriptor can deal with the abnormal scene in which people are running in all directions.

For the lawn scene, the detection accuracies under different conditions are shown in Figure 9. Both the one-class SVM and KPCA classification methods achieve high accuracy without the state transition restriction strategy.

For the indoor scene and the plaza scene, the detection accuracies under different conditions are shown in Figures 10 and 11, respectively. The restriction of the state transition improves the accuracy. In summary, the KPCA method is generally better than the one-class SVM for abnormal detection in these experiments. Furthermore, the data distributions need to be considered.

The performance on the UMN dataset compared with the state-of-the-art methods is summarized in Table 1. The results in the table are not post-processed by the state transition restriction strategy. Our method obtains high accuracy for all three scenes of the UMN dataset.

5. Conclusions

We propose an abnormal detection method based on analyzing the optical flow feature. The method has two components: computing the histogram of the optical flow orientation (HOFO) and applying the one-class support vector machine and kernel principal component analysis for classification. The HOFO feature is computed on the original frame or on the foreground image, and the details of its parameters are analyzed. The algorithm has been tested on several video sequences, and the experimental results show its effectiveness. From the experimental results, we can see that the none and L2hys-0.4 normalization schemes generally give the best performance. The detection results under the signed angle and original image condition are broadly acceptable. In general, the KPCA novelty detection method is as good as the one-class SVM, but under certain data distributions, the one-class SVM obtains more accurate performance.

Future work will aim at reducing the false alarms and training the samples online. Two solutions are under consideration: capturing more efficient features based on the optical flow or replacing the optical flow with other representations of the event information. Online learning is also urgently needed: due to the large amount of normal examples, it is hard to learn the training samples in one batch. Moreover, our method focuses on detecting global abnormal events, but detecting local abnormal events is also important; improving the method to detect global and local abnormal events jointly is also necessary.

Acknowledgements

This work is partially supported by the SURECAP CPER project (fonction de surveillance dans les réseaux de capteurs sans fil via contrat de plan Etat-Région) and the Platform CAPSEC (capteurs pour la sécurité) funded by Région Champagne-Ardenne and FEDER (fonds européen de développement régional), the Fundamental Research Funds for the Central Universities and the National Natural Science Foundation of China (Grant No. U1435220).

Author Contributions

Tian Wang and Hichem Snoussi designed the experiments and wrote the paper; Tian Wang performed the experiments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Utasi, Á.; Czúni, L. Detection of unusual optical flow patterns by multilevel hidden Markov models. Opt. Eng. 2010, 49, 017201. [Google Scholar]
  2. Kosmopoulos, D.; Chatzis, S.P. Robust visual behavior recognition. IEEE Signal Process. Mag. 2010, 27, 34–45. [Google Scholar]
  3. Xiang, T.; Gong, S. Incremental and adaptive abnormal behaviour detection. Comput. Vis. Image Underst. 2008, 111, 59–73. [Google Scholar]
  4. Jiménez-Hernández, H.; González-Barbosa, J.J.; Garcia-Ramírez, T. Detecting abnormal vehicular dynamics at intersections based on an unsupervised learning approach and a stochastic model. Sensors 2010, 10, 7576–7601. [Google Scholar]
  5. Haines, T.S.; Xiang, T. Delta-dual hierarchical dirichlet processes: A pragmatic abnormal behaviour detector. Proceedings of the IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011; pp. 2198–2205.
  6. Benezeth, Y.; Jodoin, P.M.; Saligrama, V. Abnormality detection using low-level co-occurring events. Pattern Recog. Lett. 2011, 32, 423–431. [Google Scholar]
  7. Schuldt, C.; Laptev, I.; Caputo, B. Recognizing human actions: A local SVM approach. Proceedings of the 17th International Conference on Pattern Recognition (ICPR), Cambridge, UK, 23–26 August 2004; Volume 3, pp. 32–36.
  8. Casey, M.C.; Hickman, D.L.; Pavlou, A.; Sadler, J.R. Small-scale anomaly detection in panoramic imaging using neural models of low-level vision. Proceedings of the SPIE Defense, Security, and Sensing (DSS), Orlando, FL, USA, 25–29 April 2011.
  9. PETS 2009 Benchmark Data. Multisensor Sequences Containing Different Crowd Activities. Available online: http://www.cvg.rdg.ac.Uk/PETS2009/a.html (accessed on 20 March 2015).
  10. Unusual Crowd Activity Dataset of University of Minnesota. Available online: http://mha.cs.umn.edu/Movies/Crowd-Activity-All.avi (accessed on 20 March 2015).
  11. Horn, B.K.; Schunck, B.G. Determining optical flow. Artif. Intell. 1981, 17, 185–203. [Google Scholar]
  12. Wang, T.; Chen, J.; Snoussi, H. Online detection of abnormal events in video streams. J. Electr. Comput. Eng. 2013. [Google Scholar] [CrossRef]
  13. Vapnik, V.N.; Lerner, A. Pattern recognition using generalized portrait method. Autom. Remote Control 1963, 24, 774–780. [Google Scholar]
  14. Boser, B.E.; Guyon, I.M.; Vapnik, V.N. A training algorithm for optimal margin classifiers. Proceedings of the ACM Fifth Annual Workshop on Computational Learning Theory (COLT), Pittsburgh, PA, USA, 27–29 July 1992; pp. 144–152.
  15. Piciarelli, C.; Micheloni, C.; Foresti, G.L. Trajectory-based anomalous event detection. IEEE Trans. Circuits Syst. Video Technol. 2008, 18, 1544–1554. [Google Scholar]
  16. Cristianini, N.; Shawe-Taylor, J. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods; Cambridge University Press: Cambridge, UK, 2000. [Google Scholar]
  17. Schölkopf, B.; Platt, J.C.; Shawe-Taylor, J.; Smola, A.J.; Williamson, R.C. Estimating the support of a high-dimensional distribution. Neural Comput. 2001, 13, 1443–1471. [Google Scholar]
  18. Canu, S.; Grandvalet, Y.; Guigue, V.; Rakotomamonjy, A. SVM and Kernel Methods Matlab Toolbox; Perception Systèmes et Information, INSA de Rouen: Rouen, France, 2005. [Google Scholar]
  19. Schölkopf, B.; Smola, A.J. Learning with Kernels: Support Vector Machines, Regularization, Optimization and Beyond; MIT Press: Cambridge, MA, USA, 2002. [Google Scholar]
  20. Schölkopf, B.; Smola, A.; Müller, K.R. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput. 1998, 10, 1299–1319. [Google Scholar]
  21. Hoffmann, H. Kernel PCA for novelty detection. Pattern Recog. 2007, 40, 863–874. [Google Scholar]
  22. Mehran, R.; Oyama, A.; Shah, M. Abnormal crowd behavior detection using social force model. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, USA, 20–25 June 2009; pp. 935–942.
  23. Cong, Y.; Yuan, J.; Liu, J. Sparse reconstruction cost for abnormal event detection. Proceedings of the IEEE Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 3449–3456.
  24. Shi, Y.; Gao, Y.; Wang, R. Real-time abnormal event detection in complicated scenes. Proceedings of the 20th International Conference on Pattern Recognition (ICPR), Istanbul, Turkey, 23–26 August 2010; pp. 3653–3656.
Figure 1. Normal and abnormal frames: (a,c) normal frames in the PETS and UMN datasets; the individuals are walking in all directions; (b) PETS abnormal frame; the individuals are walking in the same direction; (d) UMN abnormal frame; the individuals are running in all directions.
Figure 2. Histogram of the optical flow orientation (HOFO) feature descriptor on the original image or the foreground image.
Figure 3. The flowchart of the proposed feature classification-based abnormal detection method.
Figure 4. The accuracy of the PETS scene under different classification methods and different features: (a) HOFO descriptors are computed on the original and the foreground image, and one-class SVM is taken as the classification method, the maximum accuracy is 96.8% ; (b) HOFO descriptors are computed on the original image and the foreground image, and KPCA is taken as the classification method, the maximum accuracy is 89.5%.
Figure 5. The normal data for training, normal data for testing and abnormal data for testing of the three largest principal components in the PETS scene.
Figure 6. Detection results of the PETS scene. “1” means normal; “–1” means abnormal: (a) original detection results; (b) detection results that are post-processed by the state transition restriction. The consecutive frame number threshold N = 8.
Figure 7. Detection results of the PETS scene: (a) training frames; people are walking in all directions; 130 frames are used for training; (b) a normal frame; (c,d) abnormal frames; people are walking in the same direction. Two hundred and nineteen frames are used for normal and abnormal testing, respectively.
Figure 8. Detection results of the UMN scenes. Normal events are those where the individuals are walking; abnormal events are those where the individuals are running: (a,c,e) normal frames of lawn, indoor and plaza scene; (b,d,f) abnormal frames of lawn, indoor and plaza scene.
Figure 9. The accuracy of the lawn scene under different classification methods and different features: (a) HOFO descriptors are computed on the original image and the foreground image; one-class SVM is taken as the classification method, the maximum accuracy is 100%; (b) HOFO descriptors are computed on the original image and the foreground image; KPCA is taken as the classification method; (c) original detection results, the maximum accuracy is 100%; (d) detection results that are post-processed by the state transition restriction strategy; the consecutive frame number threshold is N = 3. Four hundred and eighty normal frames are used for training; 100 frames are used for normal and abnormal testing, respectively.
Figure 10. The accuracy of the indoor scene under different classification methods and different features: (a) HOFO descriptors are computed on the original image and the foreground image; one-class SVM is taken as the classification method, the maximum accuracy is 95.4%; (b) HOFO descriptors are computed on the original image and the foreground image; KPCA is taken as the classification method; (c) original detection results, the maximum accuracy is 94.6%; (d) detection results that are post-processed by the state transition restriction strategy; the consecutive frame number threshold is N = 3. 250 frames are used for training; 120 frames are used for normal and abnormal testing, respectively.
Figure 11. The accuracy of the plaza scene under different classification methods and different features: (a) HOFO of the original image and the foreground image; one-class SVM is taken as the classification method, the maximum accuracy is 95.2%; (b) HOFO of the original image and the foreground image; KPCA is taken as the classification method, the maximum accuracy is 98.7%; (c) original detection results; (d) detection results that are post-processed by the state transition restriction strategy; the consecutive frame number threshold is N = 4. 250 normal frames are used for training; 114 frames are used for normal and abnormal testing, respectively.
Table 1. The comparison of our proposed method with the state-of-the-art methods for global abnormal event detection in the UMN dataset. TPR, true positive rate; FPR, false positive rate. NN, nearest neighbor. SRC, sparse reconstruction cost. STCOG, spatial-temporal co-occurrence Gaussian mixture models.
Method               Area under ROC
                     Lawn       Indoor     Plaza
Social Force [22]              0.96
Optical Flow [22]              0.84
NN [23]                        0.93
SRC [23]             0.995      0.975      0.964
STCOG [24]           0.9362     0.7759     0.9661
HOFO SVM (Ours)      0.9845     0.9037     0.9815
HOFO PCA (Ours)      0.9992     0.9880     0.9989
