Search Results (1,214)

Search Parameters:
Keywords = automated connection

18 pages, 11948 KiB  
Article
Image-Based Shrimp Aquaculture Monitoring
by Beatriz Correia, Osvaldo Pacheco, Rui J. M. Rocha and Paulo L. Correia
Sensors 2025, 25(1), 248; https://doi.org/10.3390/s25010248 - 4 Jan 2025
Abstract
Shrimp farming is a growing industry, and automating certain processes within aquaculture tanks is becoming increasingly important to improve efficiency. This paper proposes an image-based system designed to address four key tasks in an aquaculture tank with Penaeus vannamei: estimating shrimp length and weight, counting shrimps, and evaluating feed pellet attractiveness. A setup was designed, including a camera connected to a Raspberry Pi computer, to capture high-quality images around a feeding plate during feeding moments. A dataset composed of 1140 images was captured over multiple days and different times of the day, under varying lighting conditions. This dataset was used to train a segmentation model, which was employed to detect and filter shrimps in optimal positions for dimension estimation. Promising results were achieved. For length estimation, the proposed method achieved a mean absolute percentage error (MAPE) of 1.56%, and width estimation resulted in a MAPE of 0.15%. These dimensions were then used to estimate the shrimp’s weight. Shrimp counting also yielded good results, with an average MAPE of 7.17%, ensuring a satisfactory estimation of the population in the field of view of the image sensor. The paper also proposes two approaches to evaluate pellet attractiveness, relying on a qualitative analysis due to the challenges of defining suitable quantitative metrics. The results were influenced by environmental conditions, highlighting the need for further investigation. The image capture and analysis prototype proposed in this paper provides a foundation for an adaptable system that can be scaled across multiple tanks, enabling efficient, automated monitoring. Additionally, it could be adapted to monitor other species raised in similar aquaculture environments. Full article
(This article belongs to the Section Smart Agriculture)
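The error metric reported above is the mean absolute percentage error (MAPE); as a quick illustration of how it is computed, a minimal sketch (the shrimp measurements below are invented, not the paper's data):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / len(actual)

# Invented shrimp lengths in mm: ground truth vs. image-based estimates
actual = [82.0, 95.0, 101.0]
predicted = [81.0, 96.5, 100.0]
print(round(mape(actual, predicted), 2))  # 1.26
```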

30 pages, 6901 KiB  
Article
EPRNG: Effective Pseudo-Random Number Generator on the Internet of Vehicles Using Deep Convolution Generative Adversarial Network
by Chenyang Fei, Xiaomei Zhang, Dayu Wang, Haomin Hu, Rong Huang and Zejie Wang
Information 2025, 16(1), 21; https://doi.org/10.3390/info16010021 - 3 Jan 2025
Abstract
With the increasing connectivity and automation on the Internet of Vehicles, safety, security, and privacy have become stringent challenges. In the last decade, several cryptography-based protocols have been proposed as intuitive solutions to protect vehicles from information leakage and intrusions. Before the encryption keys are generated, a random number generator (RNG) plays an important role in cybersecurity. Several deep learning-based RNGs have been deployed to train the initial value and generate pseudo-random numbers. However, interference from actual unpredictable driving environments renders the system unreliable due to its low-randomness outputs. Furthermore, dynamics in the training process make these methods subject to training instability and pattern collapse through overfitting. In this paper, we propose an Effective Pseudo-Random Number Generator (EPRNG) which exploits a deep convolution generative adversarial network (DCGAN)-based approach using our processed vehicle datasets and entropy-driven stopping method-based training processes for the generation of pseudo-random numbers. Our model starts from the vehicle data source to stitch images and add noise to enhance the entropy of the images and then inputs them into our network. In addition, we design an entropy-driven stopping method that enables our model training to stop at the optimal epoch so as to prevent overfitting. The results of the evaluation indicate that our entropy-driven stopping method can effectively generate pseudo-random numbers in a DCGAN. Our numerical experiments on famous test suites (NIST, ENT) demonstrate the effectiveness of the developed approach in high-quality random number generation for the IoV. Furthermore, the PRNGs are successfully applied to image encryption, and the performance metrics of the encryption are close to ideal values. Full article
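The entropy-driven stopping criterion above relies on Shannon entropy as a randomness measure; a minimal sketch of that measure over raw bytes (illustrative only, not the paper's implementation):

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; 8.0 means every byte value is equally likely."""
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())

print(shannon_entropy(b"abababab"))                  # 1.0 (one fair bit per byte)
print(round(shannon_entropy(bytes(range(256))), 1))  # 8.0 (perfectly uniform)
```

High-quality PRNG output should approach the 8 bits/byte ceiling; a drop in this value during training signals degraded randomness.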

17 pages, 1299 KiB  
Article
Security Evaluation of Provably Secure ECC-Based Anonymous Authentication and Key Agreement Scheme for IoT
by Kisung Park, Myeonghyun Kim and Youngho Park
Sensors 2025, 25(1), 237; https://doi.org/10.3390/s25010237 - 3 Jan 2025
Abstract
The proliferation of the Internet of Things (IoT) has worsened the challenge of maintaining data and user privacy. IoT end devices, often deployed in unsupervised environments and connected to open networks, are susceptible to physical tampering and various other security attacks. Thus, robust, efficient authentication and key agreement (AKA) protocols are essential to protect data privacy during exchanges between end devices and servers. The previous work in “Provably Secure ECC-Based Anonymous Authentication and Key Agreement for IoT” proposed a novel AKA scheme for secure IoT environments. They claimed their protocol offers comprehensive security features, guarding against numerous potential flaws while achieving session key security. However, this paper demonstrates through logical and mathematical analyses that the previous work is vulnerable to various attacks. We conducted a security analysis using the extended Canetti and Krawczyk (eCK) model, which is widely employed in security evaluations. This model considers scenarios where an attacker has complete control over the network, including the ability to intercept, modify, and delete messages, while also accounting for the potential exposure of ephemeral private keys. Furthermore, we show that their scheme fails to meet critical security requirements and relies on flawed security assumptions. We prove our findings using the Automated Validation of Internet Security Protocols and Applications (AVISPA), a widely recognized formal verification tool. To strengthen attack resilience, we propose several recommendations for the advancement of more robust and efficient AKA protocols specifically designed for IoT environments. Full article

15 pages, 2408 KiB  
Article
Dual-Stage AI Model for Enhanced CT Imaging: Precision Segmentation of Kidney and Tumors
by Nalan Karunanayake, Lin Lu, Hao Yang, Pengfei Geng, Oguz Akin, Helena Furberg, Lawrence H. Schwartz and Binsheng Zhao
Tomography 2025, 11(1), 3; https://doi.org/10.3390/tomography11010003 - 3 Jan 2025
Abstract
Objectives: Accurate kidney and tumor segmentation of computed tomography (CT) scans is vital for diagnosis and treatment, but manual methods are time-consuming and inconsistent, highlighting the value of AI automation. This study develops a fully automated AI model using vision transformers (ViTs) and convolutional neural networks (CNNs) to detect and segment kidneys and kidney tumors in contrast-enhanced CT (CECT) scans, with a focus on improving sensitivity for small, indistinct tumors. Methods: The segmentation framework employs a ViT-based model for the kidney organ, followed by a 3D UNet model with enhanced connections and attention mechanisms for tumor detection and segmentation. Two CECT datasets were used: a public dataset (KiTS23: 489 scans) and a private institutional dataset (Private: 592 scans). The AI model was trained on 389 public scans, with validation performed on the remaining 100 scans and external validation performed on all 592 private scans. Tumors were categorized by TNM staging as small (≤4 cm) (KiTS23: 54%, Private: 41%), medium (>4 cm to ≤7 cm) (KiTS23: 24%, Private: 35%), and large (>7 cm) (KiTS23: 22%, Private: 24%) for detailed evaluation. Results: Kidney and kidney tumor segmentations were evaluated against manual annotations as the reference standard. The model achieved a Dice score of 0.97 ± 0.02 for kidney organ segmentation. For tumor detection and segmentation on the KiTS23 dataset, the sensitivities and average false-positive rates per patient were as follows: 0.90 and 0.23 for small tumors, 1.0 and 0.08 for medium tumors, and 0.96 and 0.04 for large tumors. The corresponding Dice scores were 0.84 ± 0.11, 0.89 ± 0.07, and 0.91 ± 0.06, respectively. External validation on the private data confirmed the model’s effectiveness, achieving the following sensitivities and average false-positive rates per patient: 0.89 and 0.15 for small tumors, 0.99 and 0.03 for medium tumors, and 1.0 and 0.01 for large tumors. The corresponding Dice scores were 0.84 ± 0.08, 0.89 ± 0.08, and 0.92 ± 0.06. Conclusions: The proposed model demonstrates consistent and robust performance in segmenting kidneys and kidney tumors of various sizes, with effective generalization to unseen data. This underscores the model’s significant potential for clinical integration, offering enhanced diagnostic precision and reliability in radiological assessments. Full article
(This article belongs to the Section Artificial Intelligence in Medical Imaging)
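The Dice scores quoted above measure overlap between predicted and reference masks; a toy sketch with invented pixel sets (not the study's data):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two pixel sets (binary masks)."""
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Invented 4x4 square masks, one shifted down a row relative to the other
truth = {(r, c) for r in range(2, 6) for c in range(2, 6)}  # 16 pixels
pred = {(r, c) for r in range(3, 7) for c in range(2, 6)}   # 16 pixels, 12 overlapping
print(dice(pred, truth))  # 0.75
```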

22 pages, 1153 KiB  
Systematic Review
Energy Inefficiency in IoT Networks: Causes, Impact, and a Strategic Framework for Sustainable Optimisation
by Ziyad Almudayni, Ben Soh, Halima Samra and Alice Li
Electronics 2025, 14(1), 159; https://doi.org/10.3390/electronics14010159 - 2 Jan 2025
Abstract
The Internet of Things (IoT) has vast potential to drive connectivity and automation across various sectors, yet energy inefficiency remains a critical barrier to achieving sustainable, high-performing networks. This study aims to identify and address the primary causes of energy wastage in IoT systems, proposing a framework to optimise energy consumption and improve overall system performance. A comprehensive literature review was conducted, focusing on studies from 2010 onwards across major databases, resulting in the identification of eleven key factors driving energy inefficiency: offloading, scheduling, latency, changing topology, load balancing, node deployment, resource management, congestion, clustering, routing, and limited bandwidth. The impact of each factor on energy usage was analysed, leading to a proposed framework that incorporates optimised communication protocols (such as CoAP and MQTT), adaptive fuzzy logic systems, and bio-inspired algorithms to streamline resource management and enhance network stability. This framework presents actionable strategies to improve IoT energy efficiency, extend device lifespan, and reduce operational costs. By addressing these energy inefficiency challenges, this study provides a path forward for more sustainable IoT systems, emphasising the need for continued research into experimental validations, context-aware solutions, and AI-driven energy management to ensure scalable and resilient IoT deployment. Full article
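Duty cycling is the usual first-order lens on the energy factors listed above; a back-of-the-envelope sketch with invented device numbers (not drawn from the reviewed studies):

```python
def average_power_mw(p_active_mw, p_sleep_mw, duty_cycle):
    """First-order average draw of a node that is active for a fraction of the time."""
    return duty_cycle * p_active_mw + (1 - duty_cycle) * p_sleep_mw

def battery_life_days(capacity_mah, voltage_v, avg_mw):
    """Stored energy (mWh) divided by average draw (mW), converted to days."""
    return capacity_mah * voltage_v / avg_mw / 24

# Invented numbers: 100 mW active, 0.1 mW asleep, active 1% of the time, 2000 mAh @ 3 V
avg = average_power_mw(100, 0.1, 0.01)
print(round(battery_life_days(2000, 3.0, avg)))  # 227 (days)
```

Factors such as congestion or poor scheduling effectively raise the duty cycle, which is why they dominate battery life in this simple model.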

21 pages, 2475 KiB  
Article
Optimization of Energy Consumption in Voice Assistants Through AI-Enabled Cache Implementation: Development and Evaluation of a Metric
by Alber Oswaldo Montoya Benitez, Álvaro Suárez Sarmiento, Elsa María Macías López and Jorge Herrera-Ramirez
Technologies 2025, 13(1), 19; https://doi.org/10.3390/technologies13010019 - 2 Jan 2025
Abstract
Intelligent systems developed under the Internet of Things (IoT) paradigm offer solutions for various social and productive scenarios. Voice assistants (VAs), as part of IoT-based systems, facilitate task execution in a simple and automated manner, from entertainment to critical activities. Lithium batteries often power these devices. However, their energy consumption can be high due to the need to remain in continuous listening mode and the time it takes to search for and deliver responses from the Internet. This work proposes the implementation of a VA through Artificial Intelligence (AI) training and using cache memory to minimize response time and reduce energy consumption. First, the difference in energy consumption between VAs in active and passive states is experimentally verified. Subsequently, a communication architecture and a model representing the behavior of VAs are presented, from which a metric is developed to evaluate the energy consumption of these devices. Comparing the cloud-based and cache-based configurations, the cache-enabled prototype achieves a response time and energy expenditure several times lower according to the developed metric, demonstrating the effectiveness of the proposed system. This development could be a viable solution for areas with limited power sources, low coverage, and mobility situations that affect internet connectivity. Full article
(This article belongs to the Section Information and Communication Technologies)
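The caching idea above can be sketched minimally; the class and query strings below are invented for illustration and are not the authors' implementation:

```python
class ResponseCache:
    """Minimal sketch of a query-response cache for a voice assistant (illustrative only)."""

    def __init__(self):
        self._store = {}
        self.hits = 0
        self.misses = 0

    def get(self, query, fetch):
        key = query.strip().lower()   # normalize so trivially different phrasings share an entry
        if key in self._store:
            self.hits += 1            # served locally: no cloud round-trip, less radio-on time
            return self._store[key]
        self.misses += 1
        answer = fetch(query)         # stands in for the slow, energy-hungry cloud lookup
        self._store[key] = answer
        return answer

cache = ResponseCache()
fetch = lambda q: f"answer to {q.strip().lower()!r}"
cache.get("What time is it", fetch)    # miss: goes to the "cloud"
cache.get(" what time is it ", fetch)  # hit: normalized key matches
print(cache.hits, cache.misses)        # 1 1
```

Every hit replaces a network round-trip with a local lookup, which is the mechanism behind the reduced response time and energy expenditure reported above.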

21 pages, 9923 KiB  
Article
Trust Region Policy Learning for Adaptive Drug Infusion with Communication Networks in Hypertensive Patients
by Mai The Vu, Seong Han Kim, Ha Le Nhu Ngoc Thanh, Majid Roohi and Tuan Hai Nguyen
Mathematics 2025, 13(1), 136; https://doi.org/10.3390/math13010136 - 1 Jan 2025
Abstract
In the field of biomedical engineering, the issue of drug delivery constitutes a multifaceted and demanding endeavor for healthcare professionals. The intravenous administration of pharmacological agents to patients and the normalization of average arterial blood pressure (AABP) to desired thresholds represent a prevalent approach employed within clinical settings. The automated closed-loop infusion of vasoactive drugs for the purpose of modulating blood pressure (BP) in patients suffering from acute hypertension has been the focus of rigorous investigation in recent years. In previous works where model-based and fuzzy controllers are used to control AABP, model-based controllers rely on a precise mathematical model, while fuzzy controllers entail complexity due to their rule sets. To overcome these challenges, this paper presents an adaptive closed-loop drug delivery system to control AABP by adjusting the infusion rate, as well as a communication time delay (CTD) analysis of wireless connectivity and interruptions in transferring feedback data as a new insight. Firstly, a nonlinear backstepping controller (NBC) is developed to control AABP by continuously adjusting vasoactive drugs using real-time feedback. Secondly, a model-free deep reinforcement learning (MF-DRL) algorithm is integrated into the NBC to dynamically adjust the coefficients of the controller. Besides various analyses such as the normal condition (without the CTD strategy), stability, and hybrid noise, a CTD analysis is implemented to illustrate the functionality of the system under wireless operation and interruptions in real-time feedback data. Full article
(This article belongs to the Special Issue Artificial Intelligence for Biomedical Applications)

17 pages, 1647 KiB  
Article
A Multi-Player Framework for Sustainable Traffic Optimization in the Era of Digital Transportation
by Areti Kotsi, Ioannis Politis, Emmanouil Chaniotakis and Evangelos Mitsakis
Infrastructures 2025, 10(1), 6; https://doi.org/10.3390/infrastructures10010006 - 30 Dec 2024
Abstract
Nowadays, traffic management challenges in the era of digital transport are rising, as the interactions of various stakeholders providing such technologies play a pivotal role in shaping traffic dynamics. The objective of this paper was to present a game-theory-based framework for modeling and optimizing urban traffic in road networks, considering the co-existence and interactions of different players composed of drivers of conventional vehicles, central governing authorities with traffic management capabilities, and competitive or cooperative connected mobility private service providers. The scope of this work was to explore and present the outcomes of diverse mixed equilibrium conditions in the road network of the city of Thessaloniki (Greece), integrating the principles of user equilibrium, system optimum, and Cournot oligopoly. The impacts of varying network attributes were systematically analyzed to provide quantitative indicators representing the overall network performance. Analysis of the results provided insights into the sensitivity and the resilience of the road network under various prevalence schemes of drivers of conventional vehicles, representing the user equilibrium characteristics, or drivers relying on traffic guidance provided by a central governing authority, representing the system optimum principles as well as the cooperation and competition schemes of private connected mobility providers with certain market shares in the network. Full article
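The user-equilibrium vs. system-optimum contrast above is classically illustrated by Pigou's two-road example, sketched here (a textbook toy network, not the Thessaloniki network studied in the paper):

```python
def average_cost(x2):
    """Pigou's two-road toy: road 1 always costs 1; road 2 costs its own flow fraction x2."""
    return (1 - x2) * 1 + x2 * x2

# User equilibrium: drivers pile onto road 2 until nobody gains by switching (x2 = 1)
# System optimum: a central planner splits the flow to minimize average cost (x2 = 0.5)
print(average_cost(1.0), average_cost(0.5))  # 1.0 0.75
```

The gap between the two values (the price of anarchy) is what a central governing authority's traffic guidance can recover, which motivates the mixed-equilibrium analysis above.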

14 pages, 2358 KiB  
Review
The Role of AMPS in Parkinson’s Disease Management: Scoping Review and Meta-Analysis
by Roberto Tedeschi, Danilo Donati and Federica Giorgi
Bioengineering 2025, 12(1), 21; https://doi.org/10.3390/bioengineering12010021 - 29 Dec 2024
Abstract
Background: Automated Mechanical Peripheral Stimulation (AMPS) is emerging as a potential therapeutic tool for managing motor and non-motor symptoms in individuals with Parkinson’s disease (PD), particularly in terms of improving gait, balance, and autonomic regulation. This scoping review aims to synthesize current evidence on AMPS’s effectiveness for these outcomes. Methods: A review was conducted on MEDLINE, Cochrane Central, Scopus, PEDro, and Web of Science. Studies were included if they examined AMPS interventions for PD patients and reported outcomes related to gait, balance, neurological function, or autonomic regulation. Data extraction focused on study design, intervention details, sample characteristics, and key outcomes. Quality was assessed using the PEDro and RoB-2 scales. Results: Six randomized controlled trials met the inclusion criteria. AMPS consistently improved gait kinematic parameters, including step length and gait velocity, and reduced gait asymmetry. In addition, increased brain connectivity between motor regions was correlated with enhanced gait speed, suggesting neuroplastic effects. Some studies reported improved autonomic regulation, with enhanced heart rate variability and blood pressure stability. However, limitations such as small sample sizes, short follow-ups, and varied protocols affected the consistency of the findings. Conclusions: AMPS shows potential as an adjunct therapy for PD, improving gait, balance, and possibly autonomic function. These preliminary findings will support further research into establishing standardized protocols, confirming long-term efficacy, and exploring AMPS’s impact on non-motor symptoms. With robust evidence, AMPS could complement existing PD management strategies and improve patient outcomes. Full article
(This article belongs to the Special Issue Advances in Physical Therapy and Rehabilitation)

16 pages, 15354 KiB  
Article
Experimental Activity with a Rover for Underwater Inspection
by Erika Ottaviano, Agnese Testa, Pierluigi Rea, Marco Saccucci, Assunta Pelliccio and Maurizio Ruggiu
Actuators 2025, 14(1), 7; https://doi.org/10.3390/act14010007 - 28 Dec 2024
Abstract
The inspection of underwater structures is often hampered by harsh environmental conditions, limited access, high costs, and inherent safety issues. This paper focuses on the use of an underwater rover to implement automated imaging techniques for facilitating inspections. The application of such techniques can significantly improve the state of monitoring, reduce operational complexity, and partially offset the financial burden of periodic inspections. To date, there has been very little work on image-based techniques for detecting and quantifying the extent of structural damage, particularly in the submerged part of marine structures. This work seeks to address this knowledge gap through the development and performance evaluation of underwater photogrammetry. The development of the research has been carried out using the FIFISH V6 rover with the Brave 7 camera, which has all the characteristics required for successful photogrammetry. To connect the sensor to the rover, a support was designed accordingly. Finally, experimental photogrammetry tests of an anchor were carried out and compared, both in and out of the sea environment, to validate the model presented. The results obtained so far confirm the validity of the proposed approach and encourage the future development of this apparatus for underwater inspections. Full article
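Image-based inspection quality of the kind described above is commonly characterized by the ground sample distance (GSD) of the captured photos; a pinhole-model sketch with invented camera numbers (not the FIFISH V6 / Brave 7 specifications):

```python
def ground_sample_distance_mm(pixel_size_um, distance_m, focal_length_mm):
    """Pinhole model: real-world footprint of one pixel at a given standoff distance."""
    return (pixel_size_um / 1000) * (distance_m * 1000) / focal_length_mm

# Invented numbers: 2.4 um pixels, 4.5 mm focal length, shooting from 1.5 m away
print(round(ground_sample_distance_mm(2.4, 1.5, 4.5), 3))  # 0.8 mm per pixel
```

The GSD bounds the smallest damage feature the photogrammetric model can resolve, so standoff distance and optics drive survey planning.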

17 pages, 7222 KiB  
Article
Extracting Regular Building Footprints Using Projection Histogram Method from UAV-Based 3D Models
by Yaoyao Ren, Xing Li, Fangyuqing Jin, Chunmei Li, Wei Liu, Erzhu Li and Lianpeng Zhang
ISPRS Int. J. Geo-Inf. 2025, 14(1), 6; https://doi.org/10.3390/ijgi14010006 - 28 Dec 2024
Abstract
Extracting building outlines from 3D models poses significant challenges stemming from the intricate diversity of structures and the complexity of urban scenes. Current techniques heavily rely on human expertise and involve repetitive, labor-intensive manual operations. To address these limitations, this paper presents an innovative automatic technique for accurately extracting building footprints, particularly those with gable and hip roofs, directly from 3D data. Our methodology encompasses several key steps: firstly, we construct a triangulated irregular network (TIN) to capture the intricate geometry of the buildings. Subsequently, we employ 2D indexing and counting grids for efficient data processing and utilize a sophisticated connected component labeling algorithm to precisely identify the extents of the roofs. A single seed point is manually specified to initiate the process, from which we select the triangular facets representing the outer walls of the buildings. Utilizing the projection histogram method, these facets are grouped and processed to extract regular building footprints. Extensive experiments conducted on datasets from Nanjing and Wuhan demonstrate the remarkable accuracy of our approach. With mean intersection over union (mIOU) values of 99.2% and 99.4%, respectively, and F1 scores of 94.3% and 96.7%, our method proves to be both effective and robust in mapping building footprints from 3D real-scene data. This work represents a significant advancement in automating the extraction of building footprints from complex 3D scenes, with potential applications in urban planning, disaster response, and environmental monitoring. Full article
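The projection histogram idea above reduces to counting occupied cells along each axis; a toy sketch on an invented occupancy grid (not the paper's TIN pipeline):

```python
def projection_histograms(mask):
    """Row and column occupancy counts of a binary grid (the projection histogram idea)."""
    rows = [sum(row) for row in mask]
    cols = [sum(col) for col in zip(*mask)]
    return rows, cols

# Toy occupancy grid standing in for roof facets projected onto the ground plane
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
rows, cols = projection_histograms(mask)
# A rectangular footprint spans the rows/columns with nonzero counts
print([i for i, v in enumerate(rows) if v])  # [1, 2]
print([j for j, v in enumerate(cols) if v])  # [1, 2, 3]
```

Peaks and gaps in the two histograms give the regularized edges of the footprint without tracing the noisy facet boundary directly.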

15 pages, 4143 KiB  
Article
Digitalized Optical Sensor Network for Intelligent Facility Monitoring
by Esther Renner, Lisa-Sophie Haerteis, Joachim Kaiser, Michael Villnow, Markus Richter, Torsten Thiel, Andreas Pohlkötter and Bernhard Schmauss
Photonics 2025, 12(1), 18; https://doi.org/10.3390/photonics12010018 - 28 Dec 2024
Abstract
Due to their inherent advantages, optical fiber sensors (OFSs) can substantially contribute to the monitoring and performance enhancement of energy infrastructure. However, optical fiber sensor systems often are standalone solutions and do not connect to the main energy infrastructure control systems. In this paper, we propose a solution for the digitalization of an optical fiber sensor system realized by the Open Platform Communications Unified Architecture (OPC UA) protocol and the Internet of Things (IoT) platform Insights Hub. The optical fiber sensor system is based on bidirectional incoherent optical frequency domain reflectometry (biOFDR) and is used for the interrogation of fiber Bragg grating (FBG) arrays. To allow for an automated sensor identification and thus measurement procedure, an optical sensor identification marker based on a unique combination of fiber Bragg gratings (FBGs) is established. To demonstrate the abilities of the digitalized sensor network, a field test was performed in a power plant test facility of Siemens Energy. Temperature measurements of a packaged FBG sensor fiber were performed with a portable demonstrator, illustrating the system’s robustness and the comprehensive data processing stream from sensor value formation to the cloud. The realized network services promote sensor data quality, fusion, and modeling, expanding opportunities using digital twin technology. Full article
(This article belongs to the Special Issue Advanced Optical Fiber Sensors for Harsh Environment Applications)
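FBG interrogation as described above rests on the Bragg condition and the grating's temperature sensitivity; a sketch using textbook values (roughly 10 pm/K near 1550 nm; the numbers are illustrative, not the field-test data):

```python
def bragg_wavelength_nm(n_eff, period_nm):
    """Bragg condition: lambda_B = 2 * n_eff * grating period."""
    return 2 * n_eff * period_nm

def shifted_wavelength_nm(lam_nm, delta_t_k, sensitivity_pm_per_k=10.0):
    """Linear temperature response; ~10 pm/K is a typical textbook value near 1550 nm."""
    return lam_nm + sensitivity_pm_per_k * delta_t_k / 1000

lam = bragg_wavelength_nm(1.447, 535.6)          # ~1550 nm
print(round(shifted_wavelength_nm(lam, 50), 2))  # a 50 K rise shifts it by ~0.5 nm
```

Tracking these sub-nanometre peak shifts per grating is what the interrogator digitizes before the OPC UA layer forwards the readings to the cloud.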

22 pages, 981 KiB  
Article
Intelligent Platform for Automating Vulnerability Detection in Web Applications
by Diogo Moreira, João Pedro Seara, João Pedro Pavia and Carlos Serrão
Electronics 2025, 14(1), 79; https://doi.org/10.3390/electronics14010079 - 27 Dec 2024
Abstract
In a world increasingly dependent on technology and in an era where connectivity is omnipresent, Web applications have become an essential part of our everyday life. The evolution of these applications, combined with the exponential increase in the number of users, has brought with it not only convenience but also significant challenges in terms of security. Ensuring the security of Web applications and their data is increasingly a priority for companies, although many companies lack the know-how, time, and money to do so. This research project studied and developed a system with the aim of automating the process of detecting vulnerabilities in Web applications by exploiting the benefits of the interoperability of the two forms of automation of the tool selected to carry out this analysis. The developed solution is low-cost and requires very little user intervention. In order to validate and evaluate the developed platform, experiments were carried out on applications with different types of vulnerabilities known in advance and on real applications. It is essential to guarantee the security of Web applications, and the developed system proved capable of automating the detection of vulnerability risks and returning the results in a relatively simple way for the user. Full article
(This article belongs to the Special Issue Research in Secure IoT-Edge-Cloud Computing Continuum)

20 pages, 3238 KiB  
Article
Enhanced Disc Herniation Classification Using Grey Wolf Optimization Based on Hybrid Feature Extraction and Deep Learning Methods
by Yasemin Sarı and Nesrin Aydın Atasoy
Tomography 2025, 11(1), 1; https://doi.org/10.3390/tomography11010001 - 26 Dec 2024
Abstract
Background/Objectives: As more people work at computers in professional settings, the incidence of lumbar disc herniation is rising. Early diagnosis and treatment are much more likely to yield favorable results, allowing the hernia to be treated before it progresses. The aim of this study was to classify lumbar disc herniations in a computer-aided, fully automated manner using magnetic resonance images (MRIs). Methods: This study presents a hybrid method integrating a residual network (ResNet50), grey wolf optimization (GWO), and machine learning classifiers such as the multi-layer perceptron (MLP) and support vector machine (SVM) to improve classification performance. The proposed approach begins with feature extraction using ResNet50, a deep convolutional neural network known for its robust feature representation capabilities; its residual connections allow for effective training and high-quality feature extraction from input images. Following feature extraction, the GWO algorithm, inspired by the social hierarchy and hunting behavior of grey wolves, is employed to optimize the feature set by selecting the most relevant features. Finally, the optimized feature set is fed into the machine learning classifiers (MLP and SVM) for classification. The use of various activation functions (e.g., ReLU, identity, logistic, and tanh) in the MLP and various kernel functions (e.g., linear, rbf, sigmoid, and polynomial) in the SVM allows for a thorough evaluation of the classifiers' performance. Results: The proposed methodology demonstrates significant improvements in metrics such as accuracy, precision, recall, and F1 score, outperforming traditional approaches in several cases. These results highlight the effectiveness of combining deep learning-based feature extraction with optimization and machine learning classifiers. Conclusions: Compared to other methods, such as capsule networks (CapsNet), EfficientNetB6, and DenseNet169, the proposed ResNet50-GWO-SVM approach achieved superior performance across all metrics, including accuracy, precision, recall, and F1 score, demonstrating its robustness and effectiveness in classification tasks. Full article
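As a rough illustration of the GWO feature-selection stage described above, the following Python sketch runs a simplified grey wolf optimizer over continuous positions that are thresholded into binary feature masks. The fitness function here is a Fisher-like class-separation score, a stand-in assumption since the paper's exact fitness (likely classifier accuracy) is not given in the abstract; parameter values are likewise assumed.

```python
import numpy as np

def gwo_feature_select(X, y, n_wolves=8, n_iters=40, seed=0):
    """Select a boolean feature mask with a simplified binary grey wolf optimizer."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    pos = rng.random((n_wolves, d))          # continuous wolf positions in [0, 1]

    def fitness(p):
        m = p > 0.5                          # threshold a position into a feature mask
        if not m.any():
            return -np.inf
        Xs = X[:, m]
        mu0, mu1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
        # Fisher-like separation score, lightly penalizing large feature sets
        return float(np.linalg.norm(mu0 - mu1)) - 0.01 * m.sum()

    for t in range(n_iters):
        scores = np.array([fitness(p) for p in pos])
        # alpha, beta, delta: the three fittest wolves lead the pack
        alpha, beta, delta = pos[np.argsort(scores)[::-1][:3]]
        a = 2.0 * (1 - t / n_iters)          # exploration coefficient decays to 0
        new_pos = np.empty_like(pos)
        for i, p in enumerate(pos):
            moves = []
            for leader in (alpha, beta, delta):
                A = 2 * a * rng.random(d) - a
                C = 2 * rng.random(d)
                moves.append(leader - A * np.abs(C * leader - p))
            new_pos[i] = np.clip(np.mean(moves, axis=0), 0.0, 1.0)
        pos = new_pos

    scores = np.array([fitness(p) for p in pos])
    return pos[np.argmax(scores)] > 0.5
```

In the paper's pipeline the columns of `X` would be ResNet50 features and the resulting mask would gate what reaches the MLP/SVM classifiers.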
34 pages, 4788 KiB  
Article
FFL-IDS: A Fog-Enabled Federated Learning-Based Intrusion Detection System to Counter Jamming and Spoofing Attacks for the Industrial Internet of Things
by Tayyab Rehman, Noshina Tariq, Farrukh Aslam Khan and Shafqat Ur Rehman
Sensors 2025, 25(1), 10; https://doi.org/10.3390/s25010010 - 24 Dec 2024
Viewed by 464
Abstract
The Internet of Things (IoT) comprises many devices that can compute and communicate, forming large networks. The Industrial Internet of Things (IIoT) is a mature application of IoT that connects embedded technologies in industrial operational settings to offer sophisticated automation and real-time decision-making. However, the IIoT faces significant cybersecurity threats, including jamming and spoofing, which could cripple critical infrastructure. Developing a robust Intrusion Detection System (IDS) is essential to address the challenges and vulnerabilities present in these systems. Traditional IDS methods achieve high detection accuracy but suffer from limited scalability and from privacy issues when handling large datasets. This paper proposes a Fog-enabled Federated Learning-based Intrusion Detection System (FFL-IDS), utilizing a Convolutional Neural Network (CNN), that mitigates these limitations. The framework allows multiple parties in IIoT networks to train deep learning models while preserving data privacy and ensuring low-latency detection through fog computing. The proposed FFL-IDS is validated on two datasets: Edge-IIoTset, explicitly tailored to IIoT environments, and CIC-IDS2017, comprising various network scenarios. On the Edge-IIoTset dataset, it achieved 93.4% accuracy, 91.6% recall, 88% precision, 87% F1 score, and 87% specificity for jamming and spoofing attacks. The system showed even better robustness on the CIC-IDS2017 dataset, achieving 95.8% accuracy, 94.9% precision, 94% recall, 93% F1 score, and 93% specificity. These results establish the proposed framework as a scalable, privacy-preserving, high-performance solution for securing IIoT networks against sophisticated cyber threats across diverse environments. Full article
(This article belongs to the Special Issue AI Technology for Cybersecurity and IoT Applications)
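The aggregation step at the heart of such a federated framework can be sketched compactly: each fog node trains a local CNN on its own traffic, and only model parameters are sent upward for averaging. The sketch below shows data-size-weighted parameter averaging in the FedAvg style; the paper's exact aggregation rule and topology are not specified in the abstract, so treat this as an assumed baseline.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Combine per-client parameter lists by data-size-weighted averaging (FedAvg-style).

    `client_weights` is a list of clients, each a list of parameter arrays
    (one array per model layer); `client_sizes` gives each client's number
    of local training samples, used as the averaging weight.
    """
    total = float(sum(client_sizes))
    n_params = len(client_weights[0])
    return [
        sum((n / total) * w[k] for w, n in zip(client_weights, client_sizes))
        for k in range(n_params)
    ]
```

Each element of a client's list would be one CNN parameter tensor; only these tensors, never raw IIoT traffic, leave the fog node, which is what preserves data privacy while keeping detection latency at the network edge.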
