Search Results (16)

Search Parameters:
Keywords = Dlib

30 pages, 63763 KiB  
Article
Computer-Aided Facial Soft Tissue Reconstruction with Computer Vision: A Modern Approach to Identifying Unknown Individuals
by Svenja Preuß, Sven Becker, Jasmin Rosenfelder and Dirk Labudde
Appl. Sci. 2025, 15(11), 6086; https://doi.org/10.3390/app15116086 - 28 May 2025
Viewed by 59
Abstract
Facial soft tissue reconstruction is an important tool in forensic investigations, especially when conventional identification methods are unsuccessful. This paper presents a digital workflow for facial reconstruction and identity verification using computer vision techniques applied to two forensic cases. The first case involves a cold case from 1993, in which a manual reconstruction by Prof. Helmer was conducted in 1994. We digitally reconstructed the same individual using CAD software (Blender), enabling a direct comparison between manual and digital techniques. To date, the deceased remains unidentified. The second case, from 2021, involved a digitally reconstructed face that was later matched to a missing person through DNA analysis. Here, comparison material was available, including an official photograph. A police officer involved in the case noted a “striking resemblance” between the reconstruction and the photograph. To evaluate this subjective impression, we performed quantitative analyses using three face recognition models (a Dlib-based method, VGG-Face, and GhostFaceNet). The models did not indicate significant similarity, highlighting a gap between human perception and algorithmic assessment. These findings suggest that current face recognition algorithms may not yet be fully suited to evaluating reconstructions, which tend to deviate in subtle but critical facial features. To achieve better facial recognition results, further research is required to generate more anatomically accurate and detailed reconstructions that align more closely with the sensitivity of AI-based identification systems.
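The Dlib-based comparison mentioned in this abstract reduces to distances between face embeddings. A minimal sketch, assuming 128-dimensional embeddings have already been extracted by a model such as Dlib's ResNet face recognizer, and using Dlib's conventional 0.6 same-identity distance cutoff (the toy vectors below are hypothetical stand-ins, not real embeddings):

```python
import math

# Dlib's reference face-recognition model treats embedding distances
# below ~0.6 as the same identity; that cutoff is assumed here.
MATCH_THRESHOLD = 0.6

def euclidean(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_identity(emb_a, emb_b, threshold=MATCH_THRESHOLD):
    """Treat two faces as the same person when their embeddings are close."""
    return euclidean(emb_a, emb_b) < threshold
```

With real embeddings, a reconstruction would be scored against a reference photograph by this distance; the paper's finding is that such distances stayed above the match range despite the perceived resemblance.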

19 pages, 9180 KiB  
Article
Accurate Real-Time Live Face Detection Using Snapshot Spectral Imaging Method
by Zhihai Wang, Shuai Wang, Weixing Yu, Bo Gao, Chenxi Li and Tianxin Wang
Sensors 2025, 25(3), 952; https://doi.org/10.3390/s25030952 - 5 Feb 2025
Cited by 1 | Viewed by 1007
Abstract
Traditional facial recognition relies on algorithms that operate on 2D or 3D digital images; it is well developed and widely applied in identity verification. In this work, we propose a novel live face detection (LFD) method that utilizes snapshot spectral imaging technology, which takes advantage of the distinctive reflected spectra of human faces. By employing a computational spectral reconstruction algorithm based on Tikhonov regularization, rapid and precise spectral reconstruction with a fidelity of over 99% was achieved for the color checkers and various types of “face” samples. Flat face areas were extracted from the “face” images with Dlib face detection and Euclidean distance selection algorithms. A large quantity of spectra were rapidly reconstructed from the selected areas and compiled into an extensive database. The convolutional neural network model trained on this database demonstrates an excellent capability for predicting different types of “faces” with an accuracy exceeding 98%, and, according to a series of evaluations, the system’s detection time consistently remained under one second, much faster than other spectral imaging LFD methods. Moreover, a pixel-level liveness detection test system was developed, and an LFD experiment shows good agreement with theoretical results, demonstrating the potential of our method in other recognition fields. The superior performance and compatibility of our method provide an alternative solution for accurate, highly integrated video LFD applications.
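The Tikhonov-regularized reconstruction step can be sketched as ridge regression: given a filter-response matrix A and measurements y = As, the spectrum estimate is s = (AᵀA + λI)⁻¹Aᵀy. The matrix sizes and λ below are illustrative stand-ins, not the paper's calibration data:

```python
import numpy as np

def tikhonov_reconstruct(A, y, lam=1e-3):
    """Ridge-regularized spectrum estimate: s = (A^T A + lam*I)^-1 A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Toy check: recover a known 4-band spectrum from 6 noiseless readings.
rng = np.random.default_rng(0)
A = rng.random((6, 4))            # stand-in filter response matrix
s_true = np.array([0.2, 0.5, 0.8, 0.3])
y = A @ s_true                    # simulated sensor measurements
s_hat = tikhonov_reconstruct(A, y, lam=1e-6)
```

The regularization weight λ trades reconstruction sharpness against noise amplification; with noisy real measurements it would be tuned rather than set near zero as in this noiseless toy.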
(This article belongs to the Special Issue Advances in Optical Sensing, Instrumentation and Systems: 2nd Edition)

18 pages, 1761 KiB  
Article
Computer Vision-Based Drowsiness Detection Using Handcrafted Feature Extraction for Edge Computing Devices
by Valerius Owen and Nico Surantha
Appl. Sci. 2025, 15(2), 638; https://doi.org/10.3390/app15020638 - 10 Jan 2025
Cited by 1 | Viewed by 1580
Abstract
Drowsy driving contributes to over 6000 fatal incidents annually in the US, underscoring the need for effective, non-intrusive drowsiness detection. This study seeks to address detection challenges, particularly in non-standard head positions. Our innovative approach leverages computer vision by combining facial feature detection using Dlib, head pose estimation with the HOPEnet model, and analyses of the percentage of eyelid closure over time (PERCLOS) and the percentage of mouth opening over time (POM). These are integrated with traditional machine learning models, such as Support Vector Machines, Random Forests, and XGBoost. These models were chosen for their ability to process detailed information from facial landmarks, head poses, PERCLOS, and POM. They achieved a high overall accuracy of 86.848% in detecting drowsiness, with a small overall model size of 5.05 MB and increased computational efficiency. The models were trained on the National Tsing Hua University Driver Drowsiness Detection Dataset, making them highly suitable for devices with a limited computational capacity. Compared to the baseline model from the literature, which achieved an accuracy of 84.82% and a larger overall model size of 37.82 MB, the method proposed in this research shows a notable improvement in the efficiency of the model with relatively similar accuracy. These findings provide a framework for future studies, potentially improving sleepiness detection systems and ultimately saving lives by enhancing road safety.
(This article belongs to the Section Computing and Artificial Intelligence)

25 pages, 8065 KiB  
Article
Drowsiness Detection in Drivers Using Facial Feature Analysis
by Ebenezer Essel, Fred Lacy, Fatema Albalooshi, Wael Elmedany and Yasser Ismail
Appl. Sci. 2025, 15(1), 20; https://doi.org/10.3390/app15010020 - 24 Dec 2024
Cited by 2 | Viewed by 2163
Abstract
Drowsiness has been recognized as a leading factor in road accidents worldwide. Despite considerable research in this area, this paper aims to improve the precision of drowsiness detection specifically for long-haul travel by employing the Dlib-based facial feature detection algorithm. This study proposes two algorithms: a static threshold and an adaptive frame threshold. Both approaches utilize eye closure ratio (ECR) and mouth aperture ratio (MAR) parameters to determine the driver’s level of drowsiness. The static threshold method issues a warning when the ECR and/or MAR values reach specific thresholds; the ECR threshold is set at 0.15 and the MAR threshold at 0.4. The static threshold method demonstrated an accuracy of 89.4% and a sensitivity of 96.5% using 1000 images. The adaptive frame threshold algorithm uses a counter to monitor the number of consecutive frames that meet the drowsiness criteria before triggering a warning. Additionally, the number of consecutive frames required is adjusted dynamically over time to enhance detection accuracy and more accurately indicate a state of drowsiness. The adaptive frame threshold algorithm was tested using four 30-min videos from a publicly available dataset, achieving a maximum accuracy of 98.2% and a sensitivity of 64.3% with 500 images.
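The static-threshold rule above can be sketched directly. The six-point eye and four-point mouth layouts mirror the common Dlib 68-landmark EAR/MAR formulation; treating a low ECR (eyes closing) or a high MAR (yawning) as the trigger is our reading of the abstract, not the authors' exact code:

```python
import math

# Thresholds quoted in the abstract: ECR 0.15, MAR 0.4.
ECR_THRESHOLD = 0.15
MAR_THRESHOLD = 0.4

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def eye_closure_ratio(eye):
    """eye: six (x, y) landmarks p1..p6; vertical extent over horizontal."""
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    return vertical / (2.0 * dist(eye[0], eye[3]))

def mouth_aperture_ratio(mouth):
    """mouth: [left corner, right corner, top lip, bottom lip] landmarks."""
    return dist(mouth[2], mouth[3]) / dist(mouth[0], mouth[1])

def is_drowsy(ecr, mar):
    """Warn when eyes are nearly closed or the mouth is wide open."""
    return ecr <= ECR_THRESHOLD or mar >= MAR_THRESHOLD
```

The adaptive variant described next in the abstract would wrap `is_drowsy` in a counter over consecutive frames instead of firing on a single frame.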
(This article belongs to the Section Computing and Artificial Intelligence)

25 pages, 6143 KiB  
Article
Dynamic Tracking and Real-Time Fall Detection Based on Intelligent Image Analysis with Convolutional Neural Network
by Ching-Bang Yao and Cheng-Tai Lu
Sensors 2024, 24(23), 7448; https://doi.org/10.3390/s24237448 - 22 Nov 2024
Cited by 1 | Viewed by 1680
Abstract
As many countries face rapid population aging, the supply of caregiving manpower falls far short of the increasing demand for care. A care system that can continuously recognize and track the care recipient and, at the first sign of a fall, promptly analyze the image to accurately assess the circumstances of the fall would therefore be highly valuable. This study integrates the mobility of drones with the Dlib HOG algorithm and intelligent fall posture analysis, aiming to achieve real-time tracking of care recipients. Additionally, the study improves the real-time multi-person action analysis feature of OpenPose to enhance its analytical capabilities for various fall scenarios, enabling accurate analysis of the approximate real-time situation when a care recipient falls. In the experimental results, the system’s identification accuracy for four fall directions is higher than that of Google Teachable Machine’s Pose Project training model. In particular, there is a significant improvement in identifying backward falls, with the identification accuracy increasing from 70.35% to 95%. The identification accuracy for forward and leftward falls also increases by nearly 14%. The experimental results therefore demonstrate that the improved identification accuracy for the four fall directions in different scenarios exceeds 95%.
(This article belongs to the Section Sensing and Imaging)

17 pages, 4715 KiB  
Article
IoT-MFaceNet: Internet-of-Things-Based Face Recognition Using MobileNetV2 and FaceNet Deep-Learning Implementations on a Raspberry Pi-400
by Ahmad Saeed Mohammad, Thoalfeqar G. Jarullah, Musab T. S. Al-Kaltakchi, Jabir Alshehabi Al-Ani and Somdip Dey
J. Low Power Electron. Appl. 2024, 14(3), 46; https://doi.org/10.3390/jlpea14030046 - 5 Sep 2024
Cited by 2 | Viewed by 53137
Abstract
IoT applications revolutionize industries by enhancing operations, enabling data-driven decisions, and fostering innovation. This study explores the growing potential of IoT-based facial recognition for mobile devices, a technology rapidly advancing within the interconnected IoT landscape. The investigation proposes a framework called IoT-MFaceNet (Internet-of-Things-based face recognition using MobileNetV2 and FaceNet deep learning) utilizing pre-existing deep-learning methods, employing the MobileNetV2 and FaceNet algorithms on both the ImageNet and FaceNet databases. Additionally, an in-house database is compiled, capturing data from 50 individuals via a web camera and 10 subjects through a smartphone camera. Pre-processing of the in-house database involves face detection using OpenCV’s Haar Cascade, Dlib’s CNN Face Detector, and Mediapipe’s Face Detection. The resulting system demonstrates high accuracy in real time and operates efficiently on low-powered devices like the Raspberry Pi 400. The evaluation involves the use of multilayer perceptron (MLP) and support vector machine (SVM) classifiers. The system primarily functions as a closed-set identification system within a computer engineering department at the College of Engineering, Mustansiriyah University, Iraq, allowing access to the department rapporteur room exclusively to department staff. The proposed system undergoes successful testing, achieving a maximum accuracy rate of 99.976%.

22 pages, 4831 KiB  
Article
Association of Visual-Based Signals with Electroencephalography Patterns in Enhancing the Drowsiness Detection in Drivers with Obstructive Sleep Apnea
by Riaz Minhas, Nur Yasin Peker, Mustafa Abdullah Hakkoz, Semih Arbatli, Yeliz Celik, Cigdem Eroglu Erdem, Beren Semiz and Yuksel Peker
Sensors 2024, 24(8), 2625; https://doi.org/10.3390/s24082625 - 19 Apr 2024
Cited by 4 | Viewed by 2516
Abstract
Individuals with obstructive sleep apnea (OSA) face increased accident risks due to excessive daytime sleepiness. PERCLOS, a recognized drowsiness detection method, encounters challenges from image quality, eyewear interference, and lighting variations, which impact its performance and require validation through physiological signals. We propose visual-based scoring using adaptive thresholding for the eye aspect ratio, with OpenCV for face detection and Dlib for eye detection from video recordings. This technique identified 453 drowsiness episodes (PERCLOS ≥ 0.3 or CLOSDUR ≥ 2 s) and 474 wakefulness episodes (PERCLOS < 0.3 and CLOSDUR < 2 s) among fifty OSA drivers in a 50 min driving simulation while wearing six-channel EEG electrodes. Applying the discrete wavelet transform, we derived ten EEG features, correlated them with visual-based episodes using various criteria, and assessed the sensitivity of brain regions and individual EEG channels. Among these features, the theta–alpha ratio exhibited the most robust mapping (94.7%) with visual-based scoring, followed by the delta–alpha ratio (87.2%) and the delta–theta ratio (86.7%). The frontal area (86.4%) and channel F4 (75.4%) aligned the most episodes with the theta–alpha ratio, while the frontal and occipital regions, particularly channels F4 and O2, displayed superior alignment across multiple features. Adding frontal or occipital channels could correlate all episodes with EEG patterns, reducing hardware needs. Our work could potentially enhance real-time drowsiness detection reliability and assess fitness to drive in OSA drivers.
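The episode-labelling rule quoted in this abstract is simple to sketch: drowsiness when PERCLOS ≥ 0.3 or the longest continuous closure (CLOSDUR) reaches 2 s, wakefulness only when both fail. Per-frame eye-closed booleans and the frame rate are assumed inputs (the paper derives them from Dlib eye landmarks):

```python
def perclos(closed_flags):
    """Fraction of frames in which the eyes are closed."""
    return sum(closed_flags) / len(closed_flags)

def max_closure_seconds(closed_flags, fps):
    """Longest run of consecutive closed frames, converted to seconds."""
    longest = run = 0
    for closed in closed_flags:
        run = run + 1 if closed else 0
        longest = max(longest, run)
    return longest / fps

def label_episode(closed_flags, fps=30):
    """Apply the PERCLOS >= 0.3 OR CLOSDUR >= 2 s rule to one episode."""
    drowsy = (perclos(closed_flags) >= 0.3
              or max_closure_seconds(closed_flags, fps) >= 2.0)
    return "drowsiness" if drowsy else "wakefulness"
```

Note the OR on the drowsy side and AND on the wakeful side, exactly as in the episode definitions above: a single long blink-free closure can mark an episode drowsy even when overall PERCLOS stays low.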
(This article belongs to the Section Biomedical Sensors)

15 pages, 6566 KiB  
Article
Two-Stage Method for Clothing Feature Detection
by Xinwei Lyu, Xinjia Li, Yuexin Zhang and Wenlian Lu
Big Data Cogn. Comput. 2024, 8(4), 35; https://doi.org/10.3390/bdcc8040035 - 26 Mar 2024
Viewed by 2607
Abstract
The rapid expansion of e-commerce, particularly in the clothing sector, has created significant demand for effective automated clothing analysis. This study presents a novel two-stage image recognition method that distinctively combines human keypoint detection, object detection, and classification into a two-stage structure. Initially, we utilize the open-source libraries OpenPose and Dlib for accurate human keypoint detection, followed by custom cropping logic for extracting body-part boxes. In the second stage, we employ a blend of Harris corner, Canny edge, and skin-pixel detection integrated with VGG16 and support vector machine (SVM) models. This configuration allows the bounding boxes to identify ten unique attributes encompassing facial features and detailed aspects of clothing. The experiments yielded an overall recognition accuracy of 81.4% for tops and 85.72% for bottoms, highlighting the efficacy of the applied methodologies in garment categorization.

26 pages, 1060 KiB  
Article
Detection of Drowsiness among Drivers Using Novel Deep Convolutional Neural Network Model
by Fiaz Majeed, Umair Shafique, Mejdl Safran, Sultan Alfarhood and Imran Ashraf
Sensors 2023, 23(21), 8741; https://doi.org/10.3390/s23218741 - 26 Oct 2023
Cited by 21 | Viewed by 5934
Abstract
Detecting drowsiness among drivers is critical for ensuring road safety and preventing accidents caused by drowsy or fatigued driving. Research on yawn detection among drivers has great significance in improving traffic safety. Although various studies have proposed deep learning-based approaches, there is still room for improvement in developing better and more accurate drowsiness detection systems using behavioral features such as mouth and eye movement. This study proposes a convolutional neural network (CNN) architecture for driver drowsiness detection. Experiments involve using the Dlib library to locate key facial points to calculate the mouth aspect ratio (MAR). To compensate for the small dataset, data augmentation is performed for the ‘yawning’ and ‘no_yawning’ classes. Models are trained and tested on the original and augmented datasets to analyze the impact on model performance. Experimental results demonstrate that the proposed CNN model achieves an average accuracy of 96.69%. Performance comparison with existing state-of-the-art approaches shows better performance of the proposed model.
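The abstract does not specify which augmentation transforms were used, so as a hedged sketch, assume simple horizontal flips and brightness jitter on normalized grayscale frames; the paper's actual transforms may differ:

```python
import numpy as np

def augment(image, rng):
    """Return the original frame plus two assumed augmentations:
    a horizontal mirror and a brightness-jittered copy."""
    mirrored = np.fliplr(image)
    gain = rng.uniform(0.8, 1.2)                 # random brightness factor
    brightened = np.clip(image * gain, 0.0, 1.0)
    return [image, mirrored, brightened]

# Toy 2x3 grayscale frame with intensities in [0, 1].
frame = np.array([[0.1, 0.5, 0.9],
                  [0.2, 0.6, 1.0]])
variants = augment(frame, np.random.default_rng(42))
```

Mirroring is label-preserving for yawning/no_yawning frames, which is the property any chosen transform must satisfy; transforms that could alter mouth geometry (heavy shears, large rotations) would need more care.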
(This article belongs to the Special Issue Fault-Tolerant Sensing Paradigms for Autonomous Vehicles)

16 pages, 6187 KiB  
Article
Identification of Driver Status Hazard Level and the System
by Jiayuan Gong, Shiwei Zhou and Wenbo Ren
Sensors 2023, 23(17), 7536; https://doi.org/10.3390/s23177536 - 30 Aug 2023
Viewed by 1397
Abstract
According to survey statistics, most traffic accidents are caused by irregularities in the driver’s behavior and status. Because no multi-level dangerous-state grading system currently exists, this paper proposes a complex state grading system for real-time detection and dynamic tracking of the driver’s state. The system uses OpenMV as the acquisition camera, combined with a cradle-head tracking system, to dynamically collect the driver’s current driving image in real time; combines the YOLOX algorithm with the OpenPose algorithm to judge dangerous driving behavior by detecting unsafe objects in the cab and the driver’s posture; and combines the improved Retinaface face detection algorithm with the Dlib feature-point algorithm to discriminate the driver’s fatigue state. The experimental results show that the accuracy of the three driver danger levels (R1, R2, and R3) obtained by the proposed system reaches 95.8%, 94.5%, and 96.3%, respectively. These results have practical significance for distracted-driving warnings.
(This article belongs to the Special Issue Intelligent Sensors for Smart and Autonomous Vehicles)

28 pages, 4513 KiB  
Article
A Multi-Feature Fusion and Situation Awareness-Based Method for Fatigue Driving Level Determination
by Fei-Fei Wei, Tao Chi and Xuebo Chen
Electronics 2023, 12(13), 2884; https://doi.org/10.3390/electronics12132884 - 29 Jun 2023
Cited by 4 | Viewed by 2030
Abstract
The detection and evaluation of fatigue levels in drivers play a crucial role in reducing traffic accidents and improving the overall quality of life. However, existing studies in this domain often focus on fatigue detection alone, with limited research on fatigue level evaluation; limitations include the use of single evaluation methods and relatively low accuracy rates. To address these issues, this paper introduces an innovative approach for determining fatigue driving levels. We employ the Dlib library and fatigue state detection algorithms to develop a novel method specifically designed to assess fatigue levels. Unlike conventional approaches, our method adopts a multi-feature fusion strategy, integrating fatigue features from the eyes, mouth, and head pose. By combining these features, we achieve a more precise evaluation of the driver’s fatigue state level. Additionally, we propose a comprehensive evaluation method based on the analytic hierarchy process (AHP) and fuzzy comprehensive evaluation, combined with situational prediction. This approach effectively evaluates the fatigue state level of drivers at specific moments or stages and provides accurate predictions. Furthermore, we optimize the gated recurrent unit (GRU) network using an enhanced marine predator algorithm (MPA), which results in significant improvements in predicting fatigue levels during situational prediction. Experimental results demonstrate a classification accuracy of 92% across various scenarios while maintaining real-time performance. In summary, this paper introduces a novel approach for determining fatigue driving levels through multi-feature fusion, incorporating AHP-based fuzzy comprehensive evaluation and situational prediction to enhance the accuracy and reliability of fatigue level evaluation. This research holds both theoretical and practical significance in the field of fatigue driving.

15 pages, 8253 KiB  
Article
Adaptive Driver Face Feature Fatigue Detection Algorithm Research
by Han Zheng, Yiding Wang and Xiaoming Liu
Appl. Sci. 2023, 13(8), 5074; https://doi.org/10.3390/app13085074 - 18 Apr 2023
Cited by 19 | Viewed by 3211
Abstract
Fatigued driving is one of the leading causes of traffic accidents, and detecting fatigued driving effectively is critical to improving driving safety. Given the variety and individual variability of driving surroundings, drivers’ states of weariness, and the uncertainty of the key characteristic factors, in this paper we propose a deep-learning-based MAX-MIN driver fatigue detection algorithm. First, the ShuffleNet V2K16 neural network is used for driver face recognition, which eliminates the influence of poor environmental adaptability in fatigue detection; second, ShuffleNet V2K16 is combined with Dlib to obtain the coordinates of the driver’s facial feature points; and finally, the values of EAR and MAR are obtained by comparing the first 100 frames of images to EAR-MAX and MAR-MIN. Our proposed method achieves 98.8% precision, 90.2% recall, and a 94.3% F-score in the actual driving scenario application.
(This article belongs to the Special Issue Computation and Complex Data Processing Systems)

20 pages, 5767 KiB  
Article
Driver Emotion and Fatigue State Detection Based on Time Series Fusion
by Yucheng Shang, Mutian Yang, Jianwei Cui, Linwei Cui, Zizheng Huang and Xiang Li
Electronics 2023, 12(1), 26; https://doi.org/10.3390/electronics12010026 - 21 Dec 2022
Cited by 16 | Viewed by 4982
Abstract
Studies have shown that driver fatigue or unpleasant emotions significantly increase driving risks. Detecting driver emotions and fatigue states and providing timely warnings can effectively minimize the incidence of traffic accidents. However, existing models rarely combine driver emotion and fatigue detection, and there is room to improve recognition accuracy. In this paper, we propose a non-invasive and efficient detection method that, to the authors’ knowledge, is the first to combine fatigue and emotion in driver-state detection. First, the captured video image sequences are preprocessed, and Dlib (an open-source image processing library) is used to locate face regions and mark key points; second, facial features are extracted, and fatigue indicators, such as the percentage of eye closure (PERCLOS) and yawn frequency, are calculated using the dual-threshold method and fused mathematically; third, an improved lightweight RM-Xception convolutional neural network is introduced to identify the driver’s emotional state; finally, the two indicators are fused on a time-series basis to obtain a comprehensive score for evaluating the driver’s state. The results show that the proposed fatigue detection algorithm has high accuracy, and the emotion recognition network reaches 73.32% accuracy on the Fer2013 dataset. The composite score calculated through time-series fusion can comprehensively and accurately reflect the driver’s state in different environments, contributing to future research in the field of assisted safe driving.
(This article belongs to the Topic Computer Vision and Image Processing)

11 pages, 1803 KiB  
Article
Effects of Image Quality on the Accuracy of Human Pose Estimation and Detection of Eye Lid Opening/Closing Using Openpose and DLib
by Run Zhou Ye, Arun Subramanian, Daniel Diedrich, Heidi Lindroth, Brian Pickering and Vitaly Herasevich
J. Imaging 2022, 8(12), 330; https://doi.org/10.3390/jimaging8120330 - 19 Dec 2022
Cited by 5 | Viewed by 3859
Abstract
Objective: The application of computer models in continuous patient activity monitoring using video cameras is complicated by the capture of images of varying qualities due to poor lighting conditions and lower image resolutions. Insufficient literature has assessed the effects of image resolution, color depth, noise level, and low light on the inference of eye opening and closing and body landmarks from digital images. Method: This study systematically assessed the effects of varying image resolutions (from 100 × 100 pixels to 20 × 20 pixels at an interval of 10 pixels), lighting conditions (from 42 to 2 lux with an interval of 2 lux), color-depths (from 16.7 M colors to 8 M, 1 M, 512 K, 216 K, 64 K, 8 K, 1 K, 729, 512, 343, 216, 125, 64, 27, and 8 colors), and noise levels on the accuracy and model performance in eye dimension estimation and body keypoint localization using the Dlib library and OpenPose with images from the Closed Eyes in the Wild and the COCO datasets, as well as photographs of the face captured at different light intensities. Results: The model accuracy and rate of model failure remained acceptable at an image resolution of 60 × 60 pixels, a color depth of 343 colors, a light intensity of 14 lux, and a Gaussian noise level of 4% (i.e., 4% of pixels replaced by Gaussian noise). Conclusions: The Dlib and OpenPose models failed to detect eye dimensions and body keypoints only at low image resolutions, lighting conditions, and color depths. Clinical Impact: Our established baseline threshold values will be useful for future work in the application of computer vision in continuous patient monitoring.
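The degradation protocol above is easy to reproduce. A minimal sketch using block-average downsampling and the paper's 4% pixel-replacement noise level; the array sizes and noise sigma are illustrative, not the study's exact pipeline:

```python
import numpy as np

def downscale(img, factor):
    """Block-average downsampling; image sides must divide by factor."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def add_pixel_noise(img, fraction, rng, sigma=0.1):
    """Replace roughly `fraction` of pixels with Gaussian-perturbed values,
    mimicking the '4% of pixels replaced by Gaussian noise' condition."""
    out = img.copy()
    mask = rng.random(img.shape) < fraction
    out[mask] = np.clip(out[mask] + rng.normal(0, sigma, mask.sum()), 0, 1)
    return out

# Example: degrade a toy 4x4 grayscale image at the paper's 4% noise level.
img = np.arange(16, dtype=float).reshape(4, 4) / 15
small = downscale(img, 2)
noisy = add_pixel_noise(img, 0.04, np.random.default_rng(1))
```

Sweeping `factor` and `fraction` over grids (e.g., resolutions 100 × 100 down to 20 × 20, noise 0% to 10%) and re-running Dlib/OpenPose on each degraded copy reproduces the kind of threshold-finding experiment described above.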

16 pages, 3185 KiB  
Article
Structure-Based Virtual Screening and Functional Validation of Potential Hit Molecules Targeting the SARS-CoV-2 Main Protease
by Balasubramanian Moovarkumudalvan, Anupriya Madhukumar Geethakumari, Ramya Ramadoss, Kabir H. Biswas and Borbala Mifsud
Biomolecules 2022, 12(12), 1754; https://doi.org/10.3390/biom12121754 - 25 Nov 2022
Cited by 13 | Viewed by 2997
Abstract
The recent global health emergency caused by the coronavirus disease 2019 (COVID-19) pandemic has taken a heavy toll, both in terms of lives and economies. Vaccines against the disease have been developed, but the efficiency of vaccination campaigns worldwide has been variable due to challenges regarding production, logistics, distribution and vaccine hesitancy. Furthermore, vaccines are less effective against new variants of the SARS-CoV-2 virus and vaccination-induced immunity fades over time. These challenges and the vaccines’ ineffectiveness for the infected population necessitate improved treatment options, including the inhibition of the SARS-CoV-2 main protease (Mpro). Drug repurposing to achieve inhibition could provide an immediate solution for disease management. Here, we used structure-based virtual screening (SBVS) to identify natural products (from NP-lib) and FDA-approved drugs (from e-Drug3D-lib and Drugs-lib) which bind to the Mpro active site with high affinity and therefore could be designated as potential inhibitors. We prioritized nine candidate inhibitors (e-Drug3D-lib: Ciclesonide, Losartan and Telmisartan; Drugs-lib: Flezelastine, Hesperidin and Niceverine; NP-lib: three natural products) and predicted their half-maximal inhibitory concentration using DeepPurpose, a deep learning tool for drug–target interactions. Finally, we experimentally validated Losartan and two of the natural products as in vitro Mpro inhibitors, using a bioluminescence resonance energy transfer (BRET)-based Mpro sensor. Our study suggests that existing drugs and natural products could be explored for the treatment of COVID-19.
(This article belongs to the Section Biomacromolecules: Proteins, Nucleic Acids and Carbohydrates)
