Article

Knowledge Development Trajectories of Intelligent Video Surveillance Domain: An Academic Study Based on Citation and Main Path Analysis

1 Department of Industrial Engineering & Management, National Taipei University of Technology, Taipei 10608, Taiwan
2 Department of Transportation Science, National Taiwan Ocean University, No. 2, Beining Rd., Zhongzheng Dist., Keelung City 202301, Taiwan
* Author to whom correspondence should be addressed.
Sensors 2024, 24(7), 2240; https://doi.org/10.3390/s24072240
Submission received: 22 January 2024 / Revised: 7 March 2024 / Accepted: 29 March 2024 / Published: 31 March 2024
(This article belongs to the Special Issue Advanced IoT Systems in Smart Cities)

Abstract

A smart city is an area in which the Internet of Things and sensors are used effectively. The data used by a smart city can be collected through cameras, sensors, and similar devices. Intelligent video surveillance (IVS) systems integrate multiple networked cameras for automatic surveillance purposes. Such systems can analyze and monitor video data and perform automatic functions required by users. This study performed main path analysis (MPA) to explore the development trends of IVS research. First, relevant articles were retrieved from the Web of Science database. Next, MPA was performed to analyze development trends in relevant research, and g-index and h-index values were analyzed to identify influential journals. Cluster analysis was then performed to group similar articles, and Wordle was used to display the key words of each group in word clouds. These key words served as the basis for naming their corresponding groups. Data mining and statistical analysis yielded six major IVS research topics, namely video cameras, background modeling, closed-circuit television, multiple cameras, person reidentification, and privacy, security, and protection. These topics can boost the future innovation and development of IVS technology and contribute to smart transportation, smart cities, and other applications. According to the study results, predictions were made regarding developments in IVS research to provide recommendations for future research.

1. Introduction

The COVID-19 pandemic accelerated the advancement of intelligent video surveillance (IVS) technologies with capabilities in mask detection, temperature measurement, and crowd analysis. These advancements align with the global trend toward the establishment of smart cities, in which video data plays a crucial role in enhancing urban governance efficiency. Additionally, rising crime rates and growing concerns over safety and security have increased the demand for IVS technologies. Accordingly, the global video surveillance market is expected to reach US$53.7 billion by 2023 and US$83.3 billion by 2028.
According to the latest report from Markets and Markets, the compound annual growth rate of the global video surveillance market from 2023 to 2028 is expected to reach 9.2%. Increasing urbanization and the trend toward smart cities have created a need for advanced IVS systems. Critical infrastructure, including transportation hubs and public utilities, requires robust surveillance systems to ensure safety and operational efficiency. The IVS market is expected to reach US$83.3 billion by 2028 (Figure 1) [1]. Given the enormous business opportunities and growth potential in the video surveillance industry, this study examined key developments in the digital surveillance industry.
The Web of Science database is a platform consisting of several literature search databases designed to support scientific and scholarly research. The Web of Science Core Collection is the premier resource on the platform and includes more than 21,000 peer-reviewed, high-quality scholarly journals. In total, 5111 articles about video surveillance were included in this paper.
This study reviewed the literature on IVS technologies to examine how the field has developed over time. Articles were retrieved from the Web of Science (WOS) academic database and analyzed using main path analysis (MPA) to examine relevant developments, theories, and trends in the field of IVS. Furthermore, cluster analysis and text mining were performed. The objectives of this study were as follows:
  • MPA was performed to identify developmental trends in the field of IVS over time.
  • Cluster analysis was performed to group IVS articles and identify main topics.
  • Text mining was performed to identify relevant key words, and growth curves were used to predict growth in topics.

1.1. Identifying Core Academic Literature

1.1.1. Identifying Intelligent Video Surveillance

IVS systems are surveillance systems that involve the use of a large number of security cameras. These systems can integrate and analyze data from multiple cameras and perform surveillance-specific actions, such as generating warning notifications. IVS is interdisciplinary, combining electronics (sensing equipment), pattern recognition, computer vision, machine learning, and network technologies. IVS can be implemented in various settings, has a range of applications, and combines fifth-generation technologies, artificial intelligence, and the Internet of things. IVS is used in urban settings, agriculture, medicine, and transportation [2].
The seven common functions of IVS are as follows [3]:
  • Image and video analysis.
  • Construction cost reduction.
  • Alerts and notifications.
  • Cloud application.
  • Unlimited transmission distance.
  • Mobile surveillance.
  • Flexible expansion of the system.
IVS has applications in cities, agriculture, medicine, and transportation. Only a handful of literature reviews have assessed developments in IVS research. Furthermore, the analytical methods used in articles on IVS have limitations. In summary, the field of IVS has not been comprehensively analyzed.
Given the aforesaid considerations, this study reviewed the literature on IVS research by organizing articles on the basis of publication year and topic popularity. The articles were retrieved from the WOS database. Accordingly, the top 20 fields in IVS were identified to examine advancements in the field and provide recommendations for future research.

1.1.2. Literature on Main Path Analyses

Many studies have conducted main path analyses or key-route main path analyses for a literature review on science or technology. Fontana et al. [4]; Verspagen [5]; and Mina and Consoli [6] employed main path analyses to identify the trajectory of technology. Calero-Medina and Noyons [7]; Strozzi and Colicchia [8]; Harris et al. [9]; Chuang et al. [10]; Bekkers and Martinelli [11]; and Lucio-Arias and Leydesdorff [12] conducted main path analyses to investigate changes in technology. Bhupatiraju et al. [13]; Yan et al. [14]; and Su et al. [15] performed main path analyses to review the literature in various disciplines. Lee [16] conducted a main path analysis to simplify a massive number of patent verdicts. Lee [17] also performed a main path analysis to identify key verdicts and observe trends in patent rights abuse from 1916 to 2016.

2. Materials and Methods

2.1. Data Source

The study searched the WOS database using the following search string: TS = ("Intelligent Monitor" OR "Intelligent Surveillance" OR "Security cameras" OR "Surveillance cameras" OR "Smart monitoring" OR "surveillance monitor" OR "Closed-circuit television" OR "video surveillance"). This search yielded 6498 articles. Articles were excluded if they lacked information on author, topic, or year of publication. In total, 5111 articles were included for further analysis.
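The exclusion step can be illustrated with a short data-handling sketch. The column names and sample rows below are hypothetical stand-ins for a Web of Science export rather than the study's actual records.

```python
import pandas as pd

# Hypothetical stand-in for a Web of Science export; column names are assumptions.
records = pd.DataFrame({
    "Authors": ["Lee, L.; Romano, R.; Stein, G.", None, "Collins, R.T.; Lipton, A.J."],
    "Title": ["Monitoring activities from multiple video streams",
              "Untitled record",
              "Algorithms for cooperative multisensor surveillance"],
    "Publication Year": [2000, 2001, None],
})

# Exclude records lacking information on author, topic (title), or year of publication.
cleaned = records.dropna(subset=["Authors", "Title", "Publication Year"])
print(f"{len(records)} records retrieved, {len(cleaned)} retained for analysis")
```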

2.2. Main Path Analysis

MPA was proposed by Hummon and Dereian [18] to analyze developments in deoxyribonucleic acid theories. MPA involves analyzing citation networks and quantifying citations of books, articles, and patents. MPA calculates the weight of each connection from the origin (source of academic articles) to the destination (sink of academic articles), then uses the weight of each path to identify the main path. This study followed the suggestions of Liu and Lu [19] and adopted two methods, namely global MPA and key-route MPA. Liu and Lu provided empirical evidence demonstrating that the search path link count (SPLC) method is superior to the search path count and search path node pair methods. They also demonstrated that MPA can effectively uncover knowledge diffusion. The SPLC weight algorithm extracts a line from the network and calculates the number of possible paths from the origin, through the nodes, to the end of the line. The number of all possible paths from the end of the line to the sink is then calculated, and the two aforementioned numbers are multiplied to calculate the final weight of all the lines (Figure 2).
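As a concrete illustration of the SPLC weighting described above, the sketch below computes link weights on a small, hypothetical citation network in which edges point from the cited (earlier) article toward the citing (later) article; it is a minimal sketch, not the software used in this study.

```python
from collections import defaultdict
from functools import lru_cache

# Toy citation network (hypothetical labels); an edge (u, v) means article v cites
# article u, so knowledge flows from u to v.
edges = [("A", "C"), ("B", "C"), ("C", "D"), ("C", "E"), ("D", "F")]

successors, predecessors = defaultdict(list), defaultdict(list)
for u, v in edges:
    successors[u].append(v)
    predecessors[v].append(u)

@lru_cache(maxsize=None)
def paths_into(node):
    # SPLC treats every vertex as a possible origin, so the count includes the
    # zero-length path that starts at the node itself.
    return 1 + sum(paths_into(p) for p in predecessors[node])

@lru_cache(maxsize=None)
def paths_to_sinks(node):
    # Number of paths from this node to any sink (an article with no citing
    # successor in the network).
    if not successors[node]:
        return 1
    return sum(paths_to_sinks(s) for s in successors[node])

# SPLC weight of a link = (paths reaching its tail) * (paths from its head to the sinks).
splc_weights = {(u, v): paths_into(u) * paths_to_sinks(v) for u, v in edges}
print(splc_weights)  # e.g., the link ("C", "D") receives weight 3 * 1 = 3
```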

2.3. Basic Statistics Analysis of Journals and Authors

Regarding journal statistics, each journal’s name, publication dates, and g-index and h-index values were obtained. For author statistics, each author’s name, publication dates, and g-index and h-index values were obtained.
Leo Egghe (2006) [20] proposed the g-index metric, which indicates that after academic articles and research results have been arranged in decreasing order of the number of their citations, the top g articles have together received at least g² citations. Hirsch (2005) [21] proposed the h-index metric, which indicates that an author has h articles that have each been cited at least h times.
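As a concrete illustration of the two definitions, the sketch below computes both indices from a list of citation counts; the counts themselves are hypothetical.

```python
def h_index(citations):
    # Largest h such that h articles have each received at least h citations.
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

def g_index(citations):
    # Largest g such that the top g articles have together received at least g^2 citations.
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, cites in enumerate(ranked, start=1):
        total += cites
        if total >= rank * rank:
            g = rank
    return g

counts = [48, 30, 12, 9, 7, 5, 3, 1]       # hypothetical citation counts
print(h_index(counts), g_index(counts))    # prints 5 8
```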
This study used the g-index for analysis and complemented the analysis with the h-index to assess the influence of journals in an academic field and the contribution made by authors. Therefore, this study listed the top 20 influential journals for IVS and the top 20 influential authors for IVS.

2.4. Growth Curve Analysis

Articles were retrieved from the WOS database. Data were analyzed and expected growth curves were drawn using Loglet Lab. The y-axis was the cumulative number of IVS articles, and the x-axis was the year. The final growth curve could predict the growth stage and maturity stage of the field of IVS.

2.5. Cluster Analysis

Cluster analysis was used to group similar articles, and key words were used to name each group. The Girvan–Newman algorithm (Girvan & Newman, 2002) [22] was used to perform the cluster analysis. Its steps are as follows (a sketch follows the list):
  • Calculate the edge betweenness of each edge in the network, that is, the number of shortest paths between all pairs of nodes that pass through that edge.
  • Remove the edge with the largest betweenness.
  • Recalculate the betweenness values and the modularity of the resulting clusters, and repeat steps 1 and 2 until all edges have been removed. Modularity compares the strength of the associations between nodes within clusters with that between nodes inside and outside clusters.
  • Select the partition with the largest modularity. This is the optimal grouping of the cluster analysis.
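A minimal sketch of this procedure is given below, using the girvan_newman and modularity routines available in NetworkX on a small built-in network; the study applied the same idea to its article citation network rather than to this stand-in graph.

```python
import networkx as nx
from networkx.algorithms import community

# Stand-in network; the study's actual input was the article citation network.
G = nx.karate_club_graph()

# Girvan-Newman yields successively finer partitions as edges are removed;
# keep the partition with the largest modularity.
best_partition, best_modularity = None, float("-inf")
for partition in community.girvan_newman(G):
    q = community.modularity(G, partition)
    if q > best_modularity:
        best_partition, best_modularity = [set(c) for c in partition], q

print(f"{len(best_partition)} clusters, modularity = {best_modularity:.3f}")
```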

2.6. Word Clouds

After cluster analysis, the titles and abstracts of articles in each group were analyzed, the frequency of each key word was calculated, and the results were presented in word clouds. Prepositions and articles were excluded from calculations. Key word frequencies were ranked, and groups were named using key words.
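The keyword counting behind the word clouds can be sketched as follows; the titles and the stop-word list are hypothetical stand-ins for a cluster's articles and for the excluded prepositions and articles.

```python
import re
from collections import Counter

# Hypothetical titles standing in for the articles of one cluster.
titles = [
    "Anomaly detection in video surveillance",
    "Deep learning for person re-identification in surveillance video",
]
stop_words = {"in", "for", "the", "a", "an", "of", "and", "on"}  # prepositions and articles

tokens = []
for title in titles:
    tokens += [w for w in re.findall(r"[a-z-]+", title.lower()) if w not in stop_words]

# Ranked keyword frequencies, as fed to the word-cloud tool.
print(Counter(tokens).most_common(5))
```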

3. Results

3.1. Data Statistics

Papers related to IVS technology research were collected from the Web of Science database. The keyword "Surveillance" was used to select papers and ensure that the collected papers were relevant. The references, publication years, authors, and page numbers of 5284 papers were obtained. After erroneous data, such as garbled text, blanks, and anonymous authors, were removed, 5111 papers remained. Because the Web of Science database can present the citation relationships between papers, it is suitable for analyzing the development trends of scientific fields.
The cumulative number of articles published per year is shown in Figure 3. The period spans 1991 to 2022. The blue and orange bars represent the number of articles published each year and the cumulative number of IVS articles, respectively. Regarding journal statistics, data on the publication period, journal name, and journal g-index were compiled. Regarding researcher data, statistics on the author, publication dates, g-index, and h-index were compiled. These indices were used to gauge the significance of a journal or author to the IVS field; the g-index served as the primary indicator and the h-index as the secondary indicator for separately identifying the top 20 influential journals and authors in the field. The number of articles published each year increased slowly starting from 1995, and in 2011, the annual increment in the number of published articles increased considerably. From 2012 to 2022, the number of articles published per year exceeded 100. The number of articles published per year was highest in 2022, suggesting that the field of IVS is developing and receiving more attention.
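The yearly and cumulative counts behind Figure 3 can be tabulated as sketched below; the column name and the sample years are hypothetical placeholders for the cleaned record set.

```python
import pandas as pd

# Placeholder records; in practice this would be the 5111 cleaned articles.
cleaned = pd.DataFrame({"Publication Year": [1995, 2011, 2011, 2022, 2022, 2022]})

per_year = cleaned["Publication Year"].value_counts().sort_index()
cumulative = per_year.cumsum()
print(pd.DataFrame({"published": per_year, "cumulative": cumulative}))
```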

3.2. Journal Statistics

The top journal was IEEE Transactions on Circuits and Systems for Video Technology, which published 116 articles (Table 1) on video collection, display, processing, filtering, conversion, synthesis, compression, transfer, communication, network, storage, retrieval, search, and hardware and software design and implementation. The high g-index value of this journal demonstrates its importance in the field of IVS. The journal with the second-highest ranking was IEEE Transactions on Image Processing, which published 60 articles explaining new theories, algorithms, and architectures related to the formation, capture, processing, communication, analysis, and display of videos and multidimensional signals. The journals that ranked third, fourth, and fifth were Pattern Recognition Letters, Pattern Recognition, and Computer Vision and Image Understanding, respectively. These journals published articles on image information and computer vision.

3.3. Academic Literature and the Overall Development Trajectory of IVS

Global MPA results are shown in Figure 4. The green and blue nodes represent the sources and sinks of articles, respectively. Each node represents an article, and directional arrows, which represent the flow of knowledge, connect the nodes. Each node has a code beside it. The code includes the name of the first author, the initials of other authors, and the year of publication. Lowercase letters were added to distinguish repeated codes.
The main path of the field of IVS is shown in Figure 4. The main path of the citation network had the highest weight and consisted of 18 nodes. Because each node represented an article, this study briefly introduced the 18 articles on the global main path.
Lee et al. (2000) explored the automatic construction of a comprehensive and independent image framework. They used the videos of multiple cameras to analyze different types of videos [23]. Collins et al. (2001) integrated multicamera surveillance systems with the objective of automatically collecting and distributing real-time information to improve situational awareness among decision-makers [24]. Mittal et al. (2003) used multiview video systems to perform video segmentation tasks and to detect and track individuals [25].
Calderara (2008) and Simone et al. (2009) proposed methods for solving the problem of overlapping fields of view in multicamera systems. They proposed a complete video system that performed image segmentation and believed that video surveillance was a crucial component of intelligent transportation systems [26,27]. Marco et al. (2012) used video analysis technology in various applications, including detection of abnormal events and in surveillance systems [28]. Mehrsan et al. (2013) proposed a method that uses videos as training samples to effectively detect suspicious events in videos [29]. Nannan et al. (2015) proposed a novel anomaly detection method that uses video surveillance. They used Gaussian process regression to identify abnormal events, investigated the effects of occlusion, and used supplemental information from previous frames to perform anomaly detection [30]. Yachuang et al. (2017) believed that the detection of abnormal events by IVS is essential, particularly for crowded settings [31]. According to Ben et al. (2018), the two main components of video surveillance systems are behavior representation and modeling. They used feature extraction and relevant technologies to describe behavior representation and provided classification methods and frameworks for behavior modeling [32].
According to Ullah et al. (2019), surveillance cameras enable the collection of large amounts of data [33]. According to Mahmoodi et al. (2019), IVS can be used for violence detection. As the demand for video surveillance systems that can automatically detect violence increases, current violence detection methods should be researched and improved [34]. Mohammad et al. (2021) and Waseem et al. (2021) believed that automatic anomaly detection is crucial when video surveillance monitors the environment [35,36]. Patrikar et al. (2022) investigated IVS-based anomaly detection methods [37]. According to Amnah et al. (2022), the increasing prevalence of CCTV has accentuated the importance of detecting anomalies in videos of crowds through IVS. Such detection tasks are challenging because personnel are required to dedicate considerable time and continuous attention to effectively identify abnormalities in the large amount of videos captured by CCTV systems [38]. According to Ekanayake et al. (2023), crowd density and anomaly detection are popular topics of video surveillance, particularly people-oriented actions and activity-based motions. These topics focus on attention-oriented classification systems based on deep learning to recognize basic activities in public venues [39].

4. Discussion

4.1. Development Trajectory of Intelligent Video Surveillance

This study performed global MPA to identify the main path with the highest weight and key-route MPA to observe the interactions and associations between paths. Together, these analyses revealed the following three periods in the development of the field of IVS.
  • Visual surveillance analysis (2000–2003): development strategy of IVS in visual surveillance. Articles during this period focused on key elements related to the development of IVS systems, such as motion tracking, camera coordination, activity classification, and event detection.
  • Anomaly detection (2008–2018): analysis of intelligent anomaly detection. Articles during this period focused on anomaly detection methods and proposed video analysis techniques to automatically analyze videos and immediately alert users of abnormal activities. Anomaly detection can supervise other surveillance tasks. The articles also proposed new methods for anomaly detection by video surveillance.
  • Detection of abnormal intelligent surveillance (2019–2023): effects of anomaly detection by intelligent surveillance. Articles during this period explored the effects of anomaly detection by intelligent surveillance systems. They held that such detection and the protection of personal privacy are both essential and that system operation and personal privacy can be maintained simultaneously.
This study drew the key-route main path to ensure that influential articles were not omitted. Figure 5 demonstrates the relationship between multiple paths and reveals the development of IVS articles at different times.
In Figure 5, the yellow box reveals that the IVS articles published from 2000 to 2008 were related to visual surveillance analysis. In general, these articles discussed the automatic detection and tracking of multiple people in high-density settings.
The red box demonstrates that the IVS articles published from 2010 to 2018 were related to anomaly detection by visual surveillance. These articles explained that as the number of indoor and outdoor cameras increased, the demand for the detection of abnormalities among moving objects in videos increased as well.
The blue box displays the IVS articles published from 2019 to 2023. These articles were mainly related to intelligent anomaly detection by IVS systems. To automatically detect abnormalities, improve problems related to artificial methods, and increase effectiveness, these articles proposed identification frameworks that incorporated convolutional neural networks (CNNs) to accurately detect abnormalities in videos. In general, this path focused more on deep learning.
The present study observed the development of IVS articles and discovered that most of them were related to video surveillance analysis, anomaly detection by video surveillance, and intelligent anomaly detection. To understand other fields, this study grouped similar articles in other fields, analyzed the titles of articles in each group, and created word clouds. The groups were named by using the study that was cited the most in each group. This study reviewed the titles of the articles to form word clouds. After cluster analysis, six groups were created, some of which were related to applications of IVS systems (Table 2). To further explore each group, the articles in each group were analyzed, SPLC weights were calculated, and main paths were drawn to ensure that each group included influential IVS articles.

4.2. Cluster Analysis of IVS

An edge-betweenness cluster analysis yielded 20 clusters, and the studies in the top six clusters were examined to determine the effects of IVS. Table 2 presents the themes, number of studies, keywords, and word clouds for these six clusters. Keywords were ranked by their frequency (numbers in parentheses) in titles; for example, “detection” appeared an average of 0.01 times in the first cluster. The studies in each cluster were analyzed to determine the main path of each cluster and the research direction along each main path. The literature growth trend charts revealed that the number of studies increased in all six clusters.
The aforementioned analyses divided the field of IVS into several groups to provide insights into its key topics. A total of six groups were identified. The top six groups were named using the key words collected by Wordle: IVS in video cameras, IVS in background modeling, IVS in closed-circuit television (CCTV), IVS in multiple cameras, IVS in person reidentification, and IVS in privacy, security, and protection. The articles within each group were analyzed to obtain their main paths and to investigate the development of IVS research in each group.

4.2.1. IVS in Video Cameras

The first group included 573 articles related to video cameras (Figure 6). The main path of this group extended from 2000 to 2023 and included 16 articles that discussed the effects of video cameras on IVS.
Ivanov et al. (2000) discussed identification problems from two perspectives and mentioned temporal extension and interaction activities for the detection and identification of multiple videos [40]. Piciarelli et al. (2008) proposed the detection of abnormal events that differed from the norm. They used trajectory analysis for anomaly detection, particularly for video and traffic surveillance [41]. Jiang et al. (2011) suggested vertical screen perception for anomaly detection. They tracked all moving objects in videos and considered the spatiotemporal context at three levels, namely the point anomaly of video objects, the sequential anomaly of object trajectories, and the co-occurrence anomaly of multiple video objects [42].
Bertini et al. (2012) explored an anomaly detection and positioning method applied in video surveillance to collect statistics in a dynamic setting and external spatiotemporal features [28]. Li et al. (2015) proposed an anomaly detection method for video surveillance of crowded settings. Their method was called an automated statistical learning framework and was based on the analysis of the layout of volumes of three-dimensional objects in spatiotemporal videos. The method could effectively detect abnormalities and precisely locate abnormal regions [43].
According to Feng et al. (2017), detection of abnormal events in video surveillance is crucial, particularly for complex settings. They used a deep learning network for image classification (PCANet) and extracted appearance and motion features from three-dimensional gradients to model the events. They constructed a Gaussian mixture model (GMM) from normal events that they observed. A deep GMM is an expandable deep generative model, and Feng et al. stacked multiple GMMs together so that their method could use relatively few parameters to achieve competitive performance [31]. Ben Mabrouk et al. (2018) investigated the two main components of video surveillance systems, namely behavior representation and behavior modeling. They reviewed the feature extraction of behavior representation and described relevant techniques, and provided classification methods and frameworks for behavior modeling [32]. Waseem et al. (2021) proposed a high-efficiency intelligent anomaly detection framework based on deep features. The framework extracted features from frames and was valuable for capturing abnormalities [36].
Mohammad et al. (2021) introduced and analyzed methods for video anomaly detection and the reliability of such methods. They proved the high sensitivity of anomaly detection in a variety of circumstances [35]. Patrikar et al. (2022) developed various methods for anomaly detection in IVS. Anomaly detection is considered a key temporal application of computer vision. Edge devices and specialized methods are used for automated anomaly detection [37]. The use of CCTV has become more common in smart cities. According to Amnah et al. (2022) and Ekanayake et al. (2023), for crowd anomaly detection, IVS is essential. Articles related to the detection of human behavior include methods that detect abnormal crowd behaviors [38,39].
Figure 6 demonstrates that from 2000 to 2015, articles mainly designed methods for anomaly detection in CCTV footage. From 2017 to 2019, they switched to the development of anomaly detection by intelligent surveillance. Since 2019, they have explored methods to establish automated anomaly detection in intelligent surveillance.

4.2.2. IVS in Background Modeling

The second group included 436 articles related to background modeling (Figure 7). The main path extended from 2000 to 2022 and included 10 articles that discussed the effects of background modeling on IVS.
Stauffer et al. (2000) and Maddalena et al. (2008) explored the detection and tracking of moving objects by artificial intelligence. The key elements of their method were motion tracking, camera coordination, activity classification, and event detection. They focused on motion tracking and demonstrated how to use the motions observed to learn activities at various learning points [44,45]. Li et al. (2004) proposed a novel algorithm for detecting foreground objects in a complex environment. The algorithm consisted of change detection, change classification, foreground segmentation, and backend maintenance and was used to arrange the order of interesting images in various environments such as offices and public buildings [46].
According to Guo et al. (2013), the detection of moving objects is a fundamental step of IVS. They proposed a solution that provided highly precise and effective processing that satisfied the need for real-time detection of moving objects [47]. According to Yang et al. (2013 and 2018), background information processing, such as object detection and scene understanding, is crucial for video surveillance. They proposed a pixel-to-model method for background modeling and for restoring monitored settings [48,49]. According to Akilan et al. (2020), foreground and background segmentation in videos is useful in intelligent transportation and video surveillance. Current algorithms are mostly based on conventional computer vision techniques, but the newest solution uses deep learning models that focus on image classification [50].
In their study, Shahbaz et al. (2021) highlighted the security risks associated with unauthorized access to restricted areas. To address this issue, they suggested integrating IVS with a sterile zone monitoring algorithm. However, implementing such an algorithm comes with its own set of challenges, including double cameras (color and infrared), dynamic background, lighting variations, camouflage, and static foreground objects. To address these challenges, Shahbaz et al. proposed an improved change detector algorithm [51]. Putro et al. (2022) proposed a high-efficiency face detection algorithm that uses lighting to precisely locate faces [52]. According to Rahmaniar et al. (2022), head posture estimation is used in several IVS systems, such as human behavior analysis, intelligent driver assistance, and visual warning and monitoring systems. These systems require precise alignment and prediction of head movements. Rahmaniar et al. proposed a method to estimate head postures using facial conditions, such as occlusion or challenging viewpoints [53].
Figure 7 reveals that from 2000 to 2016, articles focused on real-time object detection. Since 2018, articles have gradually changed their attention to the application of CNN in IVS.

4.2.3. IVS in Person Reidentification (PReID)

The third group included 207 articles related to PReID (Figure 8). The main path extended from 2012 to 2023 and included 11 articles that discussed the effects of PReID in IVS.
Satta et al. (2012), Tao et al. (2013 and 2015), and An et al. (2015) discussed PReID, focusing on matching people at different times and locations. In computer vision, PReID involves identifying individuals who have previously passed through the surveillance camera network [54,55,56,57]. Liu et al. (2017) proposed a novel model based on soft attention called the end-to-end comparative attention network, which was specifically designed for PReID tasks [58]. Liu et al. (2018) stated that PReID in videos is a core function of security and video surveillance. They proposed a new accumulative motion context network for this crucial issue [59].
Zeng et al. (2018) mentioned that PReID is a new task of IVS and is closely associated with many actual applications [60]. Almasawa et al. (2019), Kang et al. (2021), Liu et al. (2022), and Uddin et al. (2023) argued that PReID plays a crucial role in IVS and has diverse applications in public safety. They proposed using deep learning to improve PReID systems, and their articles are essential for different applications of computer vision [61,62,63,64].
Figure 8 reveals that from 2012 to 2015, articles mainly discussed the development of PReID in IVS. Since 2017, articles have gradually switched to the application of deep learning in PReID.

4.2.4. IVS in Closed-Circuit Television

The fourth group included 177 articles related to CCTV (Figure 9). The main path extended from 2003 to 2022 and included 11 articles that discussed the effects of CCTV in IVS.
Welsh et al. (2003) systematically reviewed articles investigating the effects of CCTV on crime at public venues. They performed a targeted and comprehensive search of published and unpublished articles [65]. Welsh et al. (2009) and Caplan et al. (2011) performed the latest systematic review and meta-analysis on the effects of CCTV on crime at public venues [66,67]. Piza et al. (2014 and 2015) explored whether environmental features changed in accordance with the type of crime. They discovered that the effect of the environment on crime rates differed by the type of crime. For example, CCTV is associated with less crime, less violent crime, and less motor vehicle theft, and stationary objects are associated with the increase of CCTV occlusion and motor vehicle theft and the decrease of violent crime and robbery [68,69].
Lim et al. (2017) and Piza et al. (2019) discovered limited evidence supporting the effectiveness of CCTV in reducing crime. Furthermore, they observed that the effectiveness was influenced by the underlying crime rate [70,71]. Idrees et al. (2018) introduced and discussed computer vision from the perspective of law enforcement. Their research is valuable for law enforcement personnel who monitor large camera networks and who are responsible for upgrading computer vision systems [72]. Chen et al. (2021) observed that a drastic increase in the number of surveillance cameras did not provide a crime deterrent effect nor evidence for investigations [73]. Thomas et al. (2022) mentioned the global expansion of CCTV programs and used systematic review methods and meta-analytic techniques to investigate the effects of CCTV programs on crime rates in different countries [74].
Figure 9 reveals that from 2003 to 2019, articles mainly discussed the need to install CCTV in IVS. Since 2019, articles have begun to change their focus on the functionality of IVS.

4.2.5. IVS in Privacy, Security, and Protection

The fifth group included 170 articles related to privacy, security, and protection (Figure 10). The main path extended from 2005 to 2023 and included 9 articles related to the effects of privacy, security, and protection in IVS.
Newton et al. (2005) introduced an algorithm that protected the privacy of video surveillance data. The algorithm de-identifies faces to retain facial features without reliable identification of individuals [75]. According to Agrawal et al. (2011), the improvement of cameras and network technologies has facilitated the capture of large amounts of video data and extensive video sharing. However, automated methods are required to de-identify individuals in videos [76].
Ramon et al. (2015) explored methods to protect individual privacy in image data. Their main contribution was proposing visual privacy protection methods [77]. Ribaric et al. (2016), Ciftci et al. (2018), and Asghar et al. (2019) discussed the concept of privacy and the relationship between privacy and data protection. They also investigated privacy protection designs and techniques for multimedia data and used a technological perspective to understand visual privacy protection [78,79,80].
Shifa et al. (2020) stated that video surveillance is often used for real-time anomaly detection and automated video analysis. The videos captured by real-time surveillance cameras often include identifiable personal information, which could include the location of surveillance and other sensitive data, and must be protected [81]. Hosny et al. (2022) proposed a new method to protect the privacy of individuals in surveillance videos. Their simulation results and safety analysis confirmed the effectiveness of their method for protecting the privacy of individuals in surveillance videos [82]. Liu et al. (2023) argued that rapid technological development increased the number of video surveillance equipment in family settings, and the importance of video privacy protection facilitated the development of different video privacy protection methods [83].
Figure 10 reveals that from 2005 to 2009, articles mainly explored feature identification and privacy protection in IVS. In 2020, articles started to discuss privacy protection in video surveillance.

4.2.6. IVS in Multiple Cameras

The sixth group included 147 articles related to multiple cameras (Figure 11). The main path extended from 2000 to 2022 and included 10 articles related to the effects of multiple cameras in IVS.
Lee et al. (2000) discussed the automatic construction of a comprehensive image-independent framework that used the videos of multiple cameras to model activities on a large scale [23]. Collins et al. (2001) and Snidaro et al. (2003) aimed to automatically collect videos and distribute real-time information by using visual-based surveillance systems and autonomous vehicles to improve situational awareness among security providers and decision-makers [24,84].
Wang (2013) and Kenk et al. (2015) reviewed the latest developments of relevant technologies from the perspective of computer vision and model identification. They proposed an integrated solution for the reidentification problem of distributed intelligent cameras [85,86]. Iguernaissi et al. (2019) introduced the multicamera tracking of objects and summarized and classified some common methods [87].
Olagoke et al. (2020) evaluated articles that investigated the physical layout, calibration, algorithm, advantages, and disadvantages of multicamera systems [88]. Liu et al. (2021) explored the storage, application, and development of wireless video surveillance systems [89]. Yu et al. (2021 and 2022) suggested using cameras connected to the Internet to monitor individuals, families, and environments. They also proposed solutions for improving privacy and storage [90,91].
Figure 11 reveals that articles discussed the calibration of multiple cameras from 2000 to 2013, discussed camera identification from 2015 to 2020, and began exploring the storage of video data in 2021.

4.2.7. Emerging Areas and Potential Opportunities in Other Applications

This study collected more than 77 additional IVS articles and identified three further groups (Table 3). The three groups are arranged by the number of articles published and represent the application of IVS in action recognition, face recognition, and cloud computing. Because these groups contain few articles, not all of them appeared on the main paths of these groups. In addition, the reviews conducted for these articles focused solely on the development of specific fields and made significant contributions to those fields. However, this study only considered influential journals and fields with large numbers of published articles (Table 3). The reviews in these groups focused on the development of specific types of technology, such as improving the recognition and analysis of different actions in videos and enhancing big data analysis. They also developed automatic detection, classification, and analysis of objects in videos to enable more precise surveillance and analysis. Therefore, if research in these areas continues, it could improve the performance of IVS in the future.

4.3. Analysis of Growth Curve of IVS

Loglet [92] analysis involves the decomposition of growth and diffusion patterns into S-shaped logistic components. The decomposition is roughly analogous to wavelet analysis, popular for signal processing and compression. In the easiest cases, a loglet appears as a single S-shaped curve. This study adopted a Logistic growth model. Loglet Lab was used to depict the growth curve of IVS and predict its maturity stage, growth stage, peak, and turning point. The dotted line in Figure 12 represents the expected total cumulative number of published articles. The solid line and the circles represent the actual total cumulative number of published articles. The results demonstrated that 2020 was the turning point of the growth curve. The curve is expected to reach the mature stage by 2035, at which time the maximum cumulative number of published articles is expected to exceed 6000. The results also indicate that the field of IVS is still in its growth stage and is 15 years away from its maturity stage.
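A single-component logistic fit of the kind performed by Loglet Lab can be sketched as follows. The data below are synthetic, shaped only to resemble the reported curve (saturation near 6000 articles, turning point near 2020), and the code is an illustrative sketch rather than the tool used in this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t_mid):
    # K: saturation level; r: growth rate; t_mid: inflection (turning) point.
    return K / (1.0 + np.exp(-r * (t - t_mid)))

# Synthetic cumulative article counts shaped like the reported curve (placeholders only).
years = np.arange(1991, 2023, dtype=float)
rng = np.random.default_rng(0)
cumulative = logistic(years, 6000, 0.25, 2020) + rng.normal(0, 40, years.size)

params, _ = curve_fit(logistic, years, cumulative, p0=[5000, 0.1, 2015])
K, r, t_mid = params
print(f"saturation ~ {K:.0f} articles, turning point ~ {t_mid:.1f}")
```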

5. Conclusions

This study used cluster analysis and text mining to identify the top six groups among 5111 articles and to analyze fields related to IVS. The six main groups were IVS in video cameras; IVS in background modeling; IVS in PReID; IVS in CCTV; IVS in privacy, security, and protection; and IVS in multiple cameras. The conclusions regarding the future development focus of the six topic groups are as follows:
  • IVS in video cameras: The detection of abnormal events in intelligent surveillance is crucial. The accurate determination of abnormal events in complex settings is particularly important.
  • IVS in background modeling: The application of CNN in IVS.
  • IVS in PReID: The usage of deep learning to improve accuracy and efficiency in PReID.
  • IVS in CCTV: The intellectualization of CCTV.
  • IVS in privacy, security, and protection: The protection of personal privacy in surveillance systems.
  • IVS in multiple cameras: The storage of data from multiple cameras.
The global MPA, key-route MPA, and cluster analysis of the six groups demonstrated that although the groups were different, they were still associated. The “visual surveillance analysis”, “anomaly detection”, and “detection of abnormal intelligent surveillance” of the global main path and key-route main path corresponded with the “IVS in video cameras”, “IVS in background modeling”, and “IVS in PReID” groups, indicating that these fields had a certain degree of influence on IVS. Furthermore, the groups demonstrated special applications, particularly among IVS in privacy, security, and protection. The importance of video privacy and protection facilitated the development of video privacy and protection methods. On the basis of the aforementioned analysis, this study suggests that further research into privacy, security, and protection is warranted.
This study comprehensively explored developments in the field of IVS and elaborated on key implications.

Author Contributions

Conceptualization and methodology, F.-L.H. and K.-Y.C.; data curation, F.-L.H. and W.-H.S.; writing—original draft preparation, K.-Y.C.; writing—review and editing, W.-H.S.; supervision, K.-Y.C.; project administration, K.-Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank those who participated in this study and all those who helped distribute and spread awareness of our study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Video Surveillance Market Forecast to 2028. Available online: https://www.marketsandmarkets.com/Market-Reports/video-surveillance-market-645.html?gclid=CjwKCAjw3dCnBhBCEiwAVvLcu-JUEbrC3L1XDChSV24lJ4sf4Gvc8nYSPP8tsOEzFj5bpP-AUQHXnRoCQEUQAvD_BwE (accessed on 4 September 2023).
  2. TechOrange. [5G New Economy: Information Security] Why are Qualcomm and Microsoft Stepping Up Their Efforts to Invest in IoT Startups? Available online: https://buzzorange.com/techorange/2020/11/06/5g-cybersecurity/ (accessed on 1 December 2023).
  3. Chen, W.-C.; Chen, H.-C. The Study of Competitive Strategy in IP Surveillance Industry—Case Study on V Company. 2013. Available online: https://www.airitilibrary.com/Common/Click_DOI?DOI=10.6342%2fNTU.2013.10573 (accessed on 1 December 2023).
  4. Fontana, R.; Nuvolari, A.; Verspagen, B. Mapping technological trajectories as patent citation networks: An application to data communication standards. Econ. Innov. New Technol. 2009, 18, 311–336. [Google Scholar] [CrossRef]
  5. Verspagen, B. Mapping technological trajectories as patent citation networks: A study on the history of fuel cell research. Adv. Complex Syst. World Sci. 2007, 10, 93–115. [Google Scholar] [CrossRef]
  6. Consoli, D.; Mina, A. An evolutionary perspective on health innovation systems. J. Evol. Econ. 2009, 19, 297–319. [Google Scholar] [CrossRef]
  7. Calero-Medina, C.; Noyons, E.C.M. Combining mapping and citation network analysis for a better understanding of the scientific development: The case of the absorptive capacity field. J. Inform. 2008, 2, 272–279. [Google Scholar] [CrossRef]
  8. Colicchia, C.; Strozzi, F. Supply chain risk management: A new methodology for a systematic literature review. Supply Chain. Manag. 2012, 17, 403–418. [Google Scholar] [CrossRef]
  9. Harris, J.K.; Beatty, K.E.; Lecy, J.D.; Cyr, J.M.; Shapiro, R.M. Mapping the multidisciplinary field of public health services and systems research. Am. J. Prev. Med. 2011, 41, 105–111. [Google Scholar] [CrossRef]
  10. Chuang, T.C.; Liu, J.S.; Lu, L.Y.Y.; Tseng, F.M.; Lee, Y.; Chang, C.T. The main paths of eTourism: Trends of managing tourism through Internet. Asia Pac. J. Tour. Res. 2017, 22, 213–231. [Google Scholar] [CrossRef]
  11. Bekkers, R.; Martinelli, A. Knowledge positions in high-tech markets: Trajectories, standards, strategies and true innovators. Technol. Forecast. Soc. Chang. 2012, 79, 1192–1216. [Google Scholar] [CrossRef]
  12. Lucio-Arias, D.; Leydesdorff, L. Main-path analysis and path-dependent transitions in HistCite™-based historiograms. J. Assoc. Inform. Sci. Technol. 2008, 59, 1948–1962. [Google Scholar] [CrossRef]
  13. Bhupatiraju, S.; Nomaler, O.; Triulzi, G.; Verspagen, B. Knowledge flows—Analyzing the core literature of innovation, entrepreneurship and science and technology studies. Res. Policy. 2012, 41, 1205–1218. [Google Scholar] [CrossRef]
  14. Yan, J.; Tseng, F.M.; Lu, L.Y.Y. Developmental trajectories of new energy vehicle research in economic management: Main path analysis. Technol. Forecast. Soc. Change. 2018, 137, 168–181. [Google Scholar] [CrossRef]
  15. Su, W.H.; Chen, K.Y.; Lu, L.Y.; Wang, J.J. Identification of technology diffusion by citation and main paths analysis: The possibility of measuring open innovation. Open Innov. Technol. Mark. Complex 2021, 7, 104. [Google Scholar] [CrossRef]
  16. Lee, M.C. A Study of the Critical Cited Decision in CAFC by Using Main Path Analysis; National Yunlin University of Science and Technology: Douliu City, Taiwan, 2012. [Google Scholar]
  17. Lee, J. A Survey of the Development Track and Trend of Patent Abuse Theory: A Viewpoint of Main Path Analysis; National Taiwan University of Science and Technology: Taipei, Taiwan, 2016. [Google Scholar]
  18. Hummon, N.P.; Dereian, P. Connectivity in a Citation Network: The Development of DNA Theory. Soc. Netw. 1989, 11, 39–63. [Google Scholar] [CrossRef]
  19. Liu, J.S.; Lu, L.Y. An integrated approach for main path analysis: Development of the Hirsch index as an example. J. Am. Soc. Inf. Sci. Technol. 2012, 63, 528–542. [Google Scholar] [CrossRef]
  20. Egghe, L. Theory and practise of the g-index. Scientometrics 2006, 69, 131–152. [Google Scholar] [CrossRef]
  21. Hirsch, J.E. An Index to Quantify an Individual’s Scientific Research Output. Proc. Natl. Acad. Sci. USA 2005, 102, 16569–16572. [Google Scholar] [CrossRef]
  22. Girvan, M.; Newman, M.E.J. Community structure in social and biological networks. Proc. Natl. Acad. Sci. USA 2002, 99, 7821–7826. [Google Scholar] [CrossRef] [PubMed]
  23. Lee, L.; Romano, R.; Stein, G. Monitoring activities from multiple video streams: Establishing a common coordinate frame. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 758–767. [Google Scholar] [CrossRef]
  24. Collins, R.T.; Lipton, A.J.; Fujiyoshi, H.; Kanade, T. Algorithms for Cooperative Multisensor Surveillance. Proc. IEEE 2001, 89, 1456–1477. [Google Scholar] [CrossRef]
  25. Mittal, A.; Davis, L.S. M(2)Tracker: A multi-view approach to segmenting and tracking people in a cluttered scene. Int. J. Comput. Vis. 2003, 51, 189–203. [Google Scholar] [CrossRef]
  26. Calderara, S.; Prati, A.; Cucchiara, R. HECOL: Homography and epipolar-based consistent labeling for outdoor park surveillance. Comput. Vis. Image Underst. 2008, 111, 21–42. [Google Scholar] [CrossRef]
  27. Simone, C.; Rita, C.; Roberto, V.; Andrea, P. Statistical Pattern Recognition for Multi-Camera Detection, Tracking, and Trajectory Analysis. In Multi-Camera Networks; Academic Press: Cambridge, MA, USA, 2009; pp. 389–413. [Google Scholar] [CrossRef]
  28. Bertini, M.; Del Bimbo, A.; Seidenari, L. Multi-scale and real-time non-parametric approach for anomaly detection and localization. Comput. Vis. Image Underst. 2011, 116, 320–329. [Google Scholar] [CrossRef]
  29. Roshtkhari, M.J.; Levine, M.D. An on-line, real-time learning method for detecting anomalies in videos using spatio-temporal compositions. Comput. Vis. Image Underst. 2013, 117, 1436–1452. [Google Scholar] [CrossRef]
  30. Li, N.; Wu, X.; Guo, H.; Xu, D.; Ou, Y.; Chen, Y.-L. Anomaly Detection in Video Surveillance via Gaussian. Int. J. Pattern Recognit. Artif. Intell. 2015, 29, 1555011. [Google Scholar] [CrossRef]
  31. Feng, Y.; Yuan, Y.; Lu, X. Learning deep event models for crowd anomaly detection. Neurocomputing 2016, 219, 548–556. [Google Scholar] [CrossRef]
  32. Ben Mabrouk, A.; Zagrouba, E. Abnormal behavior recognition for intelligent video surveillance systems: A review. Expert Syst. Appl. 2018, 91, 480–491. [Google Scholar] [CrossRef]
  33. Ullah, F.U.M.; Ullah, A.; Muhammad, K.; Haq, I.U.; Baik, S.W. Violence Detection Using Spatiotemporal Features with 3D Convolutional Neural Network. Sensors 2019, 19, 2472. [Google Scholar] [CrossRef]
  34. Mahmoodi, J.; Salajeghe, A. A classification method based on optical flow for violence detection. Expert Syst. Appl. 2019, 127, 121–127. [Google Scholar] [CrossRef]
  35. Sarker, M.I.; Losada-Gutiérrez, C.; Marrón-Romera, M.; Fuentes-Jiménez, D.; Luengo-Sánchez, S. Semi-Supervised Anomaly Detection in Video-Surveillance Scenes in the Wild. Sensors 2021, 21, 3993. [Google Scholar] [CrossRef]
  36. Ullah, W.; Ullah, A.; Haq, I.U.; Muhammad, K.; Sajjad, M.; Baik, S.W. CNN features with bi-directional LSTM for real-time anomaly detection in surveillance networks. Multimedia Tools Appl. 2020, 80, 16979–16995. [Google Scholar] [CrossRef]
  37. Patrikar, D.R.; Parate, M.R. Anomaly detection using edge computing in video surveillance system: Review. arXiv 2022, arXiv:2107.02778. [Google Scholar] [CrossRef]
  38. Aldayri, A.; Albattah, W. Taxonomy of Anomaly Detection Techniques in Crowd Scenes. Sensors. 2022, 22, 6080. [Google Scholar] [CrossRef]
  39. Ekanayake, E.M.C.L.; Lei, Y.; Li, C. Crowd Density Level Estimation and Anomaly Detection Using Multicolumn Multistage Bilinear Convolution Attention Network (MCMS-BCNN-Attention). Appl. Sci. 2022, 13, 248. [Google Scholar] [CrossRef]
  40. Ivanov, Y.; Bobick, A. Recognition of visual activities and interactions by stochastic parsing. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 852–872. [Google Scholar] [CrossRef]
  41. Piciarelli, C.; Micheloni, C.; Foresti, G.L. Trajectory-Based Anomalous Event Detection. IEEE Trans. Circuits Syst. Video Technol. 2008, 18, 1544–1554. [Google Scholar] [CrossRef]
  42. Jiang, F.; Yuan, J.; Tsaftaris, S.A.; Katsaggelos, A.K. Anomalous video event detection using spatiotemporal context. Comput. Vis. Image Underst. 2011, 115, 323–333. [Google Scholar] [CrossRef]
  43. Li, N.; Wu, X.; Xu, D.; Guo, H.; Feng, W. Spatio-temporal context analysis within video volumes for anomalous-event detection and localization. Neurocomputing 2015, 155, 309–319. [Google Scholar] [CrossRef]
  44. Stauffer, C.; Grimson, W. Learning patterns of activity using real-time tracking. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 747–757. [Google Scholar] [CrossRef]
  45. Maddalena, L.; Petrosino, A.; Ferone, A. Object motion detection and tracking by an artificial intelligence approach. Int. J. Pattern Recognit. Artif. Intell. 2008, 22, 915–928. [Google Scholar] [CrossRef]
  46. Li, L.; Huang, W.; Gu, I.Y.-H.; Tian, Q. Statistical modeling of complex backgrounds for foreground object detection. IEEE Trans. Image Process. 2004, 13, 1459–1472. [Google Scholar] [CrossRef]
  47. Guo, J.-M.; Hsia, C.-H.; Liu, Y.-F.; Shih, M.-H.; Chang, C.-H.; Wu, J.-Y. Fast Background Subtraction Based on a Multilayer Codebook Model for Moving Object Detection. IEEE Trans. Circuits Syst. Video Technol. 2013, 23, 1809–1821. [Google Scholar] [CrossRef]
  48. Yang, L.; Cheng, H.; Su, J.; Li, X. Pixel-to-Model Distance for Robust Background Reconstruction. IEEE Trans. Circuits Syst. Video Technol. 2015, 26, 903–916. [Google Scholar] [CrossRef]
  49. Yang, L.; Li, J.; Luo, Y.; Zhao, Y.; Cheng, H.; Li, J. Deep Background Modeling Using Fully Convolutional Network. IEEE Trans. Intell. Transp. Syst. 2017, 19, 254–262. [Google Scholar] [CrossRef]
  50. Akilan, T.; Wu, Q.J.; Safaei, A.; Huo, J.; Yang, Y. A 3D CNN-LSTM-Based Image-to-Image Foreground Segmentation. IEEE Trans. Intell. Transp. Syst. 2019, 21, 959–971. [Google Scholar] [CrossRef]
  51. Shahbaz, A.; Jo, K.-H. Improved Change Detector Using Dual-Camera Sensors for Intelligent Video Surveillances. IEEE Sens. J. 2020, 21, 11435–11442. [Google Scholar] [CrossRef]
  52. Putro, M.D.; Nguyen, D.-L.; Jo, K.-H. An Efficient Face Detector on a CPU Using Dual-Camera Sensors for Intelligent Video Surveillances. IEEE Sens. J. 2021, 22, 565–574. [Google Scholar] [CrossRef]
  53. Rahmaniar, W.; Haq, Q.M.U.; Lin, T.-L. Wide Range Head Pose Estimation Using a Single RGB Camera for Intelligent Surveillance. IEEE Sens. J. 2022, 22, 11112–11121. [Google Scholar] [CrossRef]
  54. Satta, R.; Fumera, G.; Roli, F. Fast person re-identification based on dIVSimilarity representations. Pattern Recognit. Lett. 2012, 33, 1838–1848. [Google Scholar] [CrossRef]
  55. Tao, D.; Jin, L.; Wang, Y.; Yuan, Y.; Li, X. Person Re-Identification by Regularized Smoothing KIVS Metric Learning. IEEE Trans. Circuits Syst. Video Technol. 2013, 23, 1675–1685. [Google Scholar] [CrossRef]
  56. Tao, D.; Jin, L.; Wang, Y.; Li, X. Person Reidentification by Minimum Classification Error-Based KIVS Metric Learning. IEEE Trans. Cybern. 2014, 45, 242–252. [Google Scholar] [CrossRef]
  57. An, L.; Yang, S.; Bhanu, B. Person Re-Identification by Robust Canonical Correlation Analysis. IEEE Signal Process. Lett. 2015, 22, 1103–1107. [Google Scholar] [CrossRef]
  58. Liu, H.; Feng, J.; Qi, M.; Jiang, J.; Yan, S. End-to-End Comparative Attention Networks for Person Re-Identification. IEEE Trans. Image Process. 2017, 26, 3492–3506. [Google Scholar] [CrossRef]
  59. Liu, H.; Jie, Z.; Jayashree, K.; Qi, M.; Jiang, J.; Yan, S.; Feng, J. Video-Based Person Re-Identification With Accumulative Motion Context. IEEE Trans. Circuits Syst. Video Technol. 2017, 28, 2788–2802. [Google Scholar] [CrossRef]
  60. Zeng, Z.; Li, Z.; Cheng, D.; Zhang, H.; Zhan, K.; Yang, Y. Two-Stream Multirate Recurrent Neural Network for Video-Based Pedestrian Reidentification. IEEE Trans. Ind. Informa. 2017, 14, 3179–3186. [Google Scholar] [CrossRef]
  61. Almasawa, M.O.; Elrefaei, L.A.; Moria, K. A Survey on Deep Learning-Based Person Re-Identification Systems. IEEE Access 2019, 7, 175228–175247. [Google Scholar] [CrossRef]
  62. Kang, J.K.; Lee, M.B.; Yoon, H.S.; Park, K.R. AS-RIG: Adaptive Selection of Reconstructed Input by Generator or Interpolation for Person Re-Identification in Cross-Modality Visible and Thermal Images. IEEE Access 2021, 9, 12055–12066. [Google Scholar] [CrossRef]
  63. Liu, M.; Zhao, J.; Zhou, Y.; Zhu, H.; Yao, R.; Chen, Y. Survey for person re-identification based on coarse-to-fine feature learning. Multimed. Tools Appl. 2022, 81, 21939–21973. [Google Scholar] [CrossRef]
  64. Uddin, K.; Bhuiyan, A.; Bappee, F.K.; Islam, M.; Hasan, M. Person Re-Identification with RGB-D and RGB-IR Sensors: A Comprehensive Survey. Sensors 2023, 23, 1504. [Google Scholar] [CrossRef]
  65. Welsh, B.C.; Farrington, D.P. Effects of closed-circuit television on crime. Ann. Am. Acad. Polit. Soc. Sci. 2003, 587, 110–135. [Google Scholar] [CrossRef]
  66. Welsh, B.C.; Farrington, D.P. Public Area CCTV and Crime Prevention: An Updated Systematic Review and Meta-Analysis. Justice Q. 2009, 26, 716–745. [Google Scholar] [CrossRef]
  67. Caplan, J.M.; Kennedy, L.W.; Petrossian, G. Police-monitored CCTV cameras in Newark, NJ: A quasi-experimental test of crime deterrence. J. Exp. Criminol. 2011, 7, 255–274. [Google Scholar] [CrossRef]
  68. Piza, E.L.; Caplan, J.M.; Kennedy, L.W. Analyzing the Influence of Micro-Level Factors on CCTV Camera Effect. J. Quant. Criminol. 2013, 30, 237–264. [Google Scholar] [CrossRef]
  69. Piza, E.L.; Caplan, J.M.; Kennedy, L.W.; Gilchrist, A.M. The effects of merging proactive CCTV monitoring with directed police patrol: A randomized controlled trial. J. Exp. Criminol. 2014, 11, 43–69. [Google Scholar] [CrossRef]
  70. Lim, H.; Wilcox, P. Crime-Reduction Effects of Open-street CCTV: Conditionality Considerations. Justice Q. 2016, 34, 597–626. [Google Scholar] [CrossRef]
  71. Piza, E.L.; Welsh, B.C.; Farrington, D.P.; Thomas, A.L. CCTV surveillance for crime prevention: A 40-year systematic review with meta-analysis. Criminol. Public Policy 2019, 18, 135–159. [Google Scholar] [CrossRef]
  72. Idrees, H.; Shah, M.; Surette, R. Enhancing camera surveillance using computer vision: A research note. arXiv, 2018. [Google Scholar] [CrossRef]
  73. Chen, C.; Ray, S.; Mubarak, S. Automated monitoring for security camera networks: Promise from computer vision labs. Secur. J. 2021, 34, 389–409. [Google Scholar] [CrossRef]
  74. Thomas, A.L.; Piza, E.L.; Welsh, B.C.; Farrington, D.P. The internationalisation of CCTV surveillance: Effects on crime and implications for emerging technologies. Int. J. Comp. Appl. Crim. Justice 2021, 46, 81–102. [Google Scholar] [CrossRef]
  75. Newton, E.; Sweeney, L.; Malin, B. Preserving privacy by de-identifying face images. IEEE Trans. Knowl. Data Eng. 2005, 17, 232–243. [Google Scholar] [CrossRef]
  76. Agrawal, P.; Narayanan, P.J. Person De-Identification in Videos. IEEE Trans. Circuits Syst. Video Technol. 2011, 21, 299–310. [Google Scholar] [CrossRef]
  77. Padilla-López, J.R.; Chaaraoui, A.A.; Flórez-Revuelta, F. Visual privacy protection methods: A survey. Expert Syst. Appl. 2015, 42, 4177–4195. [Google Scholar] [CrossRef]
  78. Ribaric, S.; Ariyaeeinia, A.; Pavesic, N. De-identification for privacy protection in multimedia content: A survey. Signal Process. Image Commun. 2016, 47, 131–151. [Google Scholar] [CrossRef]
  79. Ciftci, S.; Akyuz, A.O.; Ebrahimi, T. A Reliable and Reversible Image Privacy Protection Based on False Colors. IEEE Trans. Multimed. 2017, 20, 68–81. [Google Scholar] [CrossRef]
  80. Asghar, M.N.; Kanwal, N.; Lee, B.; Fleury, M.; Herbst, M.; Qiao, Y. Visual Surveillance Within the EU General Data Protection Regulation: A Technology Perspective. IEEE Access 2019, 7, 111709–111726. [Google Scholar] [CrossRef]
  81. Shifa, A.; Asghar, M.N.; Fleury, M.; Kanwal, N.; Ansari, M.S.; Lee, B.; Herbst, M.; Qiao, Y. MuLViS: Multi-Level Encryption Based Security System for Surveillance Videos. IEEE Access 2020, 8, 177131–177155. [Google Scholar] [CrossRef]
  82. Hosny, K.M.; Zaki, M.A.; Hamza, H.M.; Fouda, M.M.; Lashin, N.A. Privacy Protection in Surveillance Videos Using Block Scrambling-Based Encryption and DCNN-Based Face Detection. IEEE Access 2022, 10, 106750–106769. [Google Scholar] [CrossRef]
  83. Liu, J.; Dai, P.; Han, G.; Sun, N. Combined CNN/RNN video privacy protection evaluation method for monitoring home scene violence. Comput. Electr. Eng. 2023, 106, 108614. [Google Scholar] [CrossRef]
  84. Snidaro, L.; Foresti, G. Real-time thresholding with Euler numbers. Pattern Recognit. Lett. 2003, 24, 1533–1544. [Google Scholar] [CrossRef]
  85. Wang, X. Intelligent multi-camera video surveillance: A review. Pattern Recognit. Lett. 2013, 34, 3–19. [Google Scholar] [CrossRef]
  86. Kenk, V.S.; Mandeljc, R.; Kovačič, S.; Kristan, M.; Hajdinjak, M.; Perš, J. Visual re-identification across large, distributed camera networks. Image Vis. Comput. 2015, 34, 11–26. [Google Scholar] [CrossRef]
  87. Iguernaissi, R.; Merad, D.; Aziz, K.; Drap, P. People tracking in multi-camera systems: A review. Multimed. Tools Appl. 2018, 78, 10773–10793. [Google Scholar] [CrossRef]
  88. Olagoke, A.S.; Ibrahim, H.; Teoh, S.S. Literature Survey on Multi-Camera System and Its Application. IEEE Access 2020, 8, 172892–172922. [Google Scholar] [CrossRef]
  89. Liu, Y.; Kong, L.; Chen, G.; Xu, F.; Wang, Z. Light-weight AI and IoT collaboration for surveillance video pre-processing. J. Syst. Arch. 2020, 114, 101934. [Google Scholar] [CrossRef]
  90. Yu, J.; Chen, H.; Wu, K.; Zhou, T.; Cai, Z.; Liu, F. Centipede: Leveraging the Distributed Camera Crowd for Cooperative Video Data Storage. IEEE Internet Things J. 2021, 8, 16498–16509. [Google Scholar] [CrossRef]
  91. Yu, J.; Chen, H.; Wu, K.; Zhou, T.; Cai, Z.; Liu, F. EviChain: A scalable blockchain for accountable intelligent surveillance systems. Int. J. Intell. Syst. 2021, 37, 1454–1478. [Google Scholar] [CrossRef]
  92. logletlab.com. Available online: https://logletlab.com/?page=index&preload=library.get.1 (accessed on 1 September 2023).
Figure 1. Video Surveillance Market Forecast to 2028 [1].
Figure 2. Calculation of weights by using SPLC: the number of possible paths leading into the start of each link and the number of possible paths from the end of the link to the sink are calculated, and the two counts are multiplied to obtain the final weight of the link (an illustrative computation is sketched after the figure list).
Figure 3. Number of articles published in the field of Intelligent Video Surveillance.
Figure 4. Relational diagram for global main path of academic articles.
Figure 5. Key-route main path of IVS articles.
Figure 6. IVS in video cameras.
Figure 7. IVS in background modeling.
Figure 8. IVS in person reidentification.
Figure 9. IVS in closed-circuit television.
Figure 10. IVS in privacy, security, and protection.
Figure 11. IVS in multiple cameras.
Figure 12. Growth curve of number of articles.
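As a concrete illustration of the SPLC weighting described in Figure 2, the short Python sketch below computes SPLC-style link weights for a small, made-up citation network. The toy edges, function names, and traversal direction (older paper toward newer paper, i.e., the direction of knowledge flow) are assumptions for this example only and do not reproduce the authors' implementation or data.

```python
# Illustrative sketch of SPLC-style link weighting on a toy citation network.
from functools import lru_cache

edges = {"A": ["C"], "B": ["C", "D"], "C": ["E"], "D": ["E", "F"], "E": [], "F": []}
preds = {v: [u for u in edges if v in edges[u]] for v in edges}

@lru_cache(maxsize=None)
def paths_into(u):
    # SPLC treats every vertex as a possible origin, so the count includes
    # the zero-length path that starts at u itself.
    return 1 + sum(paths_into(w) for w in preds[u])

@lru_cache(maxsize=None)
def paths_to_sink(v):
    # Number of paths from v to any sink (a vertex with no outgoing links).
    return 1 if not edges[v] else sum(paths_to_sink(x) for x in edges[v])

# Weight of link (u, v) = paths ending at u (from any origin) * paths from v to the sinks.
weights = {(u, v): paths_into(u) * paths_to_sink(v) for u in edges for v in edges[u]}
print(weights)
```

Links with the largest weights form the main path candidates; the key-route variant shown in Figure 5 starts the traversal from the highest-weighted links rather than only from sources.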
Table 1. Top 20 journals in the field of IVS.
Rank (by g-index) | Journal | g-Index | h-Index | Active Years | Papers after 2000
1 | IEEE Transactions on Circuits and Systems for Video Technology | 66 | 40 | 1998–2022 | 116
2 | IEEE Transactions on Image Processing | 60 | 29 | 2003–2022 | 60
3 | Pattern Recognition Letters | 50 | 23 | 2003–2022 | 65
4 | Pattern Recognition | 48 | 22 | 2006–2023 | 62
5 | Computer Vision and Image Understanding | 43 | 23 | 2006–2023 | 43
6 | IEEE Transactions on Pattern Analysis and Machine Intelligence | 42 | 28 | 2000–2022 | 42
7 | Neurocomputing | 41 | 22 | 2008–2022 | 65
8 | IEEE Transactions on Intelligent Transportation Systems | 39 | 23 | 2011–2022 | 50
9 | Multimedia Tools and Applications | 39 | 22 | 2010–2023 | 211
10 | Image and Vision Computing | 39 | 18 | 2002–2022 | 39
11 | IEEE Access | 37 | 25 | 2013–2023 | 159
12 | Sensors | 37 | 24 | 2012–2023 | 162
13 | Expert Systems with Applications | 35 | 21 | 2008–2023 | 49
14 | IEEE Transactions on Multimedia | 34 | 21 | 2005–2022 | 34
15 | IEEE Transactions on Information Forensics and Security | 31 | 18 | 2013–2023 | 31
16 | Machine Vision and Applications | 28 | 18 | 2006–2022 | 47
17 | Automation in Construction | 26 | 16 | 2001–2022 | 26
18 | IEEE Internet of Things Journal | 25 | 12 | 2018–2023 | 26
19 | IEEE Sensors Journal | 23 | 13 | 2002–2022 | 29
20 | Journal of Visual Communication and Image Representation | 21 | 14 | 2007–2023 | 47
Total papers after 2000: 1363
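The g-index and h-index values in Table 1 follow the standard bibliometric definitions; the minimal Python sketch below shows how both indices can be computed from a journal's per-paper citation counts. The sample citation counts are hypothetical and serve only to illustrate the calculation.

```python
# Minimal sketch of the standard h-index and g-index definitions behind Table 1.
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the g most-cited papers together have at least g^2 citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

sample = [45, 30, 22, 18, 10, 6, 4, 1]  # hypothetical per-paper citation counts
print(h_index(sample), g_index(sample))  # prints "6 8" for this sample
```

Because the g-index credits a journal's most highly cited papers, it is always at least as large as the h-index, which is why Table 1 ranks journals by g-index.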
Table 2. Topics, number of articles, growth curves, and word clouds (the growth-curve and word-cloud images for each group are not reproduced here).
Theme | Keywords (weight)
Group 1 (573 papers): IVS in Video Cameras | Detection (0.01), Surveillance (0.01), Video (0.01), Deep (0.002), Intelligent (0.002), System (0.003), Behavior (0.004), Data (0.004), Performance (0.004), Method (0.005)
Group 2 (436 papers): IVS in Background Modeling | Model (0.01), Objects (0.01), Algorithm (0.01), Background (0.012), Dynamic (0.002), Learning (0.002), Real-time (0.002), Segmentation (0.002), Analysis (0.003), Applications (0.003)
Group 3 (207 papers): IVS in Person Re-Identification (PReID) | Person (0.01), Re-identification (0.02), Feature (0.01), Different (0.01), Datasets (0.01), Results (0.004), Propose (0.01), Approach (0.004), Challenging (0.003), Matching (0.003)
Group 4 (177 papers): IVS in Closed-Circuit Television (CCTV) | CCTV (0.015), Crime (0.01), Effects (0.01), Policy (0.01), Evidence (0.001), Safety (0.001), Control (0.002), Findings (0.003), Research (0.004), Public (0.004)
Group 5 (170 papers): IVS in Privacy, Security, and Protection | Privacy (0.01), Protection (0.01), Security (0.01), Recognition (0.001), Devices (0.002), Framework (0.002), Smart (0.002), Storage (0.002), Encryption (0.003), Face (0.004)
Group 6 (147 papers): IVS in Multiple Cameras | Cameras (0.011), Monitoring (0.002), Network (0.002), Processing (0.002), Multiple (0.003), Problem (0.003), View (0.003), Image (0.004), Paper (0.004), Performance (0.004)
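The word clouds summarized in Table 2 were drawn with Wordle. Purely as an illustration, a comparable cloud could be rendered from the same keyword weights with the third-party Python wordcloud package; the package is assumed to be installed separately, and the output file name is hypothetical.

```python
# Illustration only: renders a word cloud from the Group 1 keyword weights in Table 2.
from wordcloud import WordCloud

group1_weights = {  # keyword weights copied from Group 1 of Table 2
    "Detection": 0.01, "Surveillance": 0.01, "Video": 0.01, "Deep": 0.002,
    "Intelligent": 0.002, "System": 0.003, "Behavior": 0.004, "Data": 0.004,
    "Performance": 0.004, "Method": 0.005,
}

cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate_from_frequencies(group1_weights)  # word size follows relative weight
cloud.to_file("group1_video_cameras.png")        # hypothetical output file name
```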
Table 3. Research themes ranked seventh to ninth.
Rank | Theme | Active Years | Papers
7 | IVS in Action Recognition | 2006–2023 | 108
8 | IVS in Face Recognition | 2003–2023 | 98
9 | IVS in Cloud Computing | 2010–2023 | 77
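The growth curves in Table 2 and Figure 12 appear to be based on logistic growth modeling of publication counts, for which the Loglet Lab tool is cited as reference [92]. As a hedged illustration only, the sketch below fits a logistic (S-shaped) curve to cumulative yearly publication counts with SciPy; the counts and the initial parameter guesses are hypothetical, not the study's data.

```python
# Hedged sketch: fit a logistic growth curve to hypothetical cumulative publication counts.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Cumulative logistic curve: saturation level K, growth rate r, midpoint year t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

years = np.arange(2000, 2024, dtype=float)
cumulative = np.array([5, 9, 15, 24, 36, 52, 75, 104, 140, 185, 240, 305, 380, 465,
                       560, 660, 765, 870, 970, 1060, 1140, 1205, 1255, 1290], dtype=float)

# Initial guesses: roughly the observed maximum, a moderate growth rate, and a mid-period year.
(K, r, t0), _ = curve_fit(logistic, years, cumulative, p0=[1400.0, 0.3, 2012.0])
print(f"Estimated saturation K = {K:.0f} papers, midpoint year = {t0:.1f}")
```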
