Deep Learning for Object Detection and Tracking in Video Surveillance Applications

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Robotics and Automation".

Deadline for manuscript submissions: closed (20 October 2023) | Viewed by 3342

Special Issue Editors

Guest Editor
College of Computer Sciences and Engineering, King Fahd University of Petroleum and Minerals (KFUPM), Dhahran 31261, Saudi Arabia
Interests: computer vision and pattern recognition; cybersecurity; intelligent systems; bio-inspired computation; multimodal machine learning

Guest Editor
Electrical Engineering Department, King Fahd University of Petroleum and Minerals (KFUPM), Dhahran 31261, Saudi Arabia
Interests: computer vision; image processing; deep learning

Guest Editor
College of Computer Sciences and Engineering, King Fahd University of Petroleum and Minerals (KFUPM), Dhahran 31261, Saudi Arabia
Interests: computer vision; deep learning; intelligent transportation; cyber-physical systems security; adversarial machine learning

Special Issue Information

Dear Colleagues,

We are pleased to announce the launch of a new Special Issue on “Deep Learning for Object Detection and Tracking in Video Surveillance Applications”, to be published in the Applied Sciences journal (https://www.mdpi.com/journal/applsci).

With the ubiquitous availability of video cameras and advances in deep learning, the past few years have witnessed rapidly growing interest in intelligent video analytics from both academia and industry. The goal is to automatically analyze video content, detect objects, and track their temporal and spatial changes under challenging scenarios. This can provide invaluable higher-level insights that benefit decision making and improve safety. We encourage submissions of original work that utilizes deep learning approaches to provide end-to-end applications for object detection and tracking in videos or live streams. Potential indoor and outdoor video surveillance applications span business, industry, smart cities, and critical infrastructures, for example: detecting and tracking vehicles and pedestrians for smart transportation; monitoring traffic jams and accidents; vehicle counting in smart parking spaces; detecting adversarial activities to ensure safety; monitoring people with health issues; analyzing facial expressions to detect mood and emotional state; tracking eye and lip movements; recognizing human gestures; analyzing gait and recognizing walking persons; managing crowded areas and counting entrances and exits; tracking customers’ behavior and spotting shoplifters; analyzing games and forecasting winners; re-identifying people; and improving recognition in the presence of occlusion.

Topics of interest are related to deep learning approaches for object detection and tracking in various application domains, including but not limited to:

  • Smart cities;
  • Smart homes;
  • Smart healthcare;
  • Autonomous vehicles and smart parking systems;
  • Visual homeland security and surveillance;
  • Crowd management;
  • Human behavior, gesture, and sign language;
  • Human motion analysis and recognition;
  • Vision-based human–computer interaction;
  • Cognitive robots and navigation systems;
  • Safety and adversarial activity recognition;
  • Person tracking in retail stores;
  • Adversarial attacks against DL-based object detection and/or tracking;
  • Drone-based surveillance and monitoring;
  • Multi-modal fusion of video and other sensory data for surveillance;
  • Anomalous activities/events detection and tracking;
  • Edge-assisted DL-based visual surveillance.

All submissions will be subject to peer review by at least two independent reviewers, on the basis of relevance, significance, and technical and presentation quality. Authors must conform to the guidelines available on the journal website.

Prof. Dr. El-Sayed El-Alfy
Dr. Motaz Alfarraj
Dr. Abdul Jabbar Siddiqui
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • object detection
  • object tracking
  • activity recognition
  • gesture and sign language
  • gait analysis and recognition
  • lip reading
  • video analytics
  • video surveillance
  • crowd management
  • smart transportation
  • smart parking
  • smart cities

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

20 pages, 4399 KiB  
Article
Fall Recognition Based on Time-Level Decision Fusion Classification
by Juyoung Kim, Beomseong Kim and Heesung Lee
Appl. Sci. 2024, 14(2), 709; https://doi.org/10.3390/app14020709 - 14 Jan 2024
Cited by 1 | Viewed by 1055
Abstract
We propose a vision-based fall detection algorithm using advanced deep learning models and fusion methods for smart safety management systems. Detecting falls through visual cues makes it possible to leverage existing surveillance cameras, minimizing the need for additional equipment and yielding a cost-effective fall detection system. The proposed system consists of four modules: object detection, pose estimation, action recognition, and result fusion, built using state-of-the-art (SOTA) models. In the fusion module, we experimented with several approaches, including voting, maximum, averaging, and probabilistic fusion, and observed a significant performance improvement with probabilistic fusion. On the HAR-UP dataset, this yields an average 0.84% increase in accuracy over the baseline without fusion. By applying the proposed time-level ensemble and skeleton-based fall detection approach, together with enhanced object detection and pose estimation modules, we substantially improved the robustness and accuracy of the system, particularly for fall detection in challenging scenarios. Full article
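The paper's fusion code is not reproduced here; as an illustration only, a minimal sketch of how time-level voting and probabilistic fusion over per-frame class probabilities might look (the function names and example values are ours, not the authors'):

```python
import numpy as np

def probabilistic_fusion(prob_maps):
    """Fuse per-frame class-probability vectors by multiplying them
    elementwise and renormalizing (a product-rule combination)."""
    fused = np.prod(np.stack(prob_maps), axis=0)
    return fused / fused.sum()

def voting_fusion(prob_maps):
    """Majority vote over each frame's argmax class decision."""
    votes = [int(np.argmax(p)) for p in prob_maps]
    return max(set(votes), key=votes.count)

# Softmax outputs for three frames over classes [no-fall, fall]
frames = [np.array([0.6, 0.4]),
          np.array([0.3, 0.7]),
          np.array([0.2, 0.8])]

fused = probabilistic_fusion(frames)
decision = "fall" if fused[1] > fused[0] else "no-fall"
```

Note how the product rule lets two confident "fall" frames outweigh one ambiguous frame, whereas a plain per-frame vote discards each frame's confidence entirely.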

22 pages, 2380 KiB  
Article
Lightweight YOLOv5s Human Ear Recognition Based on MobileNetV3 and Ghostnet
by Yanmin Lei, Dong Pan, Zhibin Feng and Junru Qian
Appl. Sci. 2023, 13(11), 6667; https://doi.org/10.3390/app13116667 - 30 May 2023
Cited by 3 | Viewed by 1563
Abstract
Ear recognition is a biometric identification technology based on human ear features: it not only detects the ear in an image but also determines whose ear it is, so it can be used to verify human identity. To improve the real-time performance of ear recognition and make it more practical, a lightweight ear recognition method based on YOLOv5s is proposed. The method consists of the following steps. First, the lightweight MobileNetV3 network is used as the backbone of the YOLOv5s ear recognition network. Second, following the idea of the GhostNet network, the C3 and Conv modules in the YOLOv5s neck network are replaced with C3Ghost and GhostConv modules, yielding the YOLOv5s-MG ear recognition model. Third, three distinctive human ear datasets, CCU-DE, USTB, and EarVN1.0, are collected. Finally, the proposed lightweight method is evaluated on four metrics: mAP, model size, computational complexity (GFLOPs), and parameter count (params). Compared with the best results of the YOLOv5s, YOLOv5s-V3, YOLOv5s-V2, and YOLOv5s-G methods on the CCU-DE, USTB, and EarVN1.0 datasets, the params, GFLOPs, and model size of the proposed YOLOv5s-MG are reduced by 35.29%, 38.24%, and 35.57%, respectively, and its FPS exceeds that of the other four methods. The experimental results show that the proposed method achieves higher FPS with a smaller model, fewer computations, and fewer parameters while maintaining ear recognition accuracy, greatly improving real-time performance; it is feasible and effective. Full article
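To see why swapping standard convolutions for Ghost modules shrinks the model, a back-of-the-envelope parameter count can be sketched. This assumes the GhostNet design of a primary convolution producing half the output channels plus cheap depthwise operations generating the remaining "ghost" feature maps (ratio 2 and a 3×3 depthwise kernel are typical defaults; the helper names are ours):

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

def ghost_conv_params(c_in, c_out, k, dw_k=3, ratio=2):
    """GhostConv: a primary conv produces c_out/ratio channels;
    cheap per-channel depthwise ops generate the remaining maps."""
    primary = c_in * (c_out // ratio) * k * k
    cheap = (c_out // ratio) * dw_k * dw_k  # depthwise: one kernel per channel
    return primary + cheap

# A typical neck-layer shape: 128 -> 256 channels, 3x3 kernel
std = conv_params(128, 256, 3)
ghost = ghost_conv_params(128, 256, 3)
saving = 1 - ghost / std  # fraction of parameters removed
```

With ratio 2 the primary convolution dominates, so each replaced layer saves just under half its parameters, which is consistent in spirit with the roughly 35% whole-model reduction the abstract reports (the neck is only part of the network).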
