Computer Vision for Surveillance

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (30 March 2022) | Viewed by 6063

Special Issue Editors


Dr. Oscar Deniz Suarez
Guest Editor
E.T.S. Ingenieros Industriales, VISILAB Grupo de Visión y Sistemas Inteligentes, University of Castilla–La Mancha, 13071 Ciudad Real, Spain
Interests: computer vision; machine learning; image analysis; video analysis

Dr. Juan Antonio Álvarez García
Guest Editor
E.T.S. Ingeniería Informática, University of Seville, 41012 Sevilla, Spain
Interests: deep learning; object detection; human behavior

Dr. Noelia Vállez Enano
Guest Editor
E.T.S. Ingenieros Industriales, VISILAB Grupo de Visión y Sistemas Inteligentes, University of Castilla–La Mancha, 13071 Ciudad Real, Spain
Interests: computer vision; machine learning; pattern recognition; image analysis

Special Issue Information

Dear Colleagues,

While early research in automatic video surveillance addressed tasks such as unattended-baggage detection, loitering detection and virtual fences, new tasks are now being explored, including violence detection, weapon detection and COVID-19-related applications. Despite their complexity, such applications are being fueled by powerful deep learning methodologies, and in most cases early detection is crucial to saving lives and reducing risk. The purpose of this Special Issue is to provide an academic platform for high-quality research papers on the application of innovative AI algorithms to video surveillance.


The potential topics of interest include but are not limited to the following:

  • Violence detection;
  • Aggressive behavior detection;
  • Fight detection;
  • Vandalism detection;
  • Firearm detection;
  • Knife detection;
  • Suspicious object detection;
  • Motion detection;
  • Object tracking;
  • Scene analysis;
  • COVID-19 applications (mask detection, thermal imaging, distancing, etc.);
  • Combination with other modalities or sensors (e.g., sound, multispectral, 3D);
  • Violence detection from mobile platforms (e.g., drones) or PTZ cameras;
  • Real-life applications;
  • Datasets.

Dr. Oscar Deniz Suarez
Dr. Juan Antonio Álvarez García
Dr. Noelia Vállez Enano
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • video surveillance
  • vision-based human activity recognition
  • violence detection
  • aggressive behavior detection
  • fight detection
  • gun detection
  • firearm detection
  • knife detection
  • mask detection

Published Papers (1 paper)


Research

16 pages, 4657 KiB  
Article
ViolenceNet: Dense Multi-Head Self-Attention with Bidirectional Convolutional LSTM for Detecting Violence
by Fernando J. Rendón-Segador, Juan A. Álvarez-García, Fernando Enríquez and Oscar Deniz
Electronics 2021, 10(13), 1601; https://doi.org/10.3390/electronics10131601 - 03 Jul 2021
Cited by 31 | Viewed by 4580
Abstract
Introducing efficient automatic violence detection into video surveillance or audiovisual content monitoring systems would greatly facilitate the work of closed-circuit television (CCTV) operators, rating agencies and those in charge of monitoring social network content. In this paper we present a new deep learning architecture that combines an adaptation of DenseNet to three dimensions, a multi-head self-attention layer and a bidirectional convolutional long short-term memory (LSTM) module to encode relevant spatio-temporal features and determine whether a video is violent or not. Furthermore, an ablation study of the input frames, comparing dense optical flow with adjacent-frame subtraction and assessing the influence of the attention layer, shows that the combination of optical flow and the attention mechanism improves results by up to 4.4%. Experiments conducted on four of the most widely used datasets for this problem match or in some cases exceed the state of the art, while reducing the number of network parameters needed (4.5 million) and improving efficiency in test accuracy (from 95.6% on the most complex dataset to 100% on the simplest one) and inference time (less than 0.3 s for the longest clips). Finally, to check whether the generated model is able to generalize violence, a cross-dataset analysis is performed, which shows the complexity of this approach: training on three datasets and testing on the remaining one, accuracy drops to 70.08% in the worst case and 81.51% in the best case, which points to future work oriented towards anomaly detection in new datasets.
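As a rough illustration of the adjacent-frame subtraction compared against dense optical flow in the abstract's ablation study, the sketch below computes per-pair difference maps from a grayscale clip. This is a hypothetical minimal example, not the authors' code; the function name and toy data are assumptions for illustration only.

```python
import numpy as np

def adjacent_frame_subtraction(frames: np.ndarray) -> np.ndarray:
    """Given a clip of shape (T, H, W), return T-1 absolute difference
    maps highlighting motion between consecutive frames."""
    frames = frames.astype(np.float32)
    return np.abs(frames[1:] - frames[:-1])

# Toy clip: 3 frames of 2x2 pixels; one pixel changes between frames 0 and 1,
# and nothing changes between frames 1 and 2.
clip = np.array([[[0, 0], [0, 0]],
                 [[0, 255], [0, 0]],
                 [[0, 255], [0, 0]]], dtype=np.uint8)
diffs = adjacent_frame_subtraction(clip)
print(diffs.shape)  # (2, 2, 2): one difference map per consecutive frame pair
```

Such motion maps (or optical flow fields) would then serve as input to the spatio-temporal network described above.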
(This article belongs to the Special Issue Computer Vision for Surveillance)
