Artificial Intelligence and Advances in Smart IoT

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Internet of Things".

Deadline for manuscript submissions: closed (30 April 2023) | Viewed by 42247

Special Issue Editors

Department of Computer Science, University of Swabi, Swabi 23430, Pakistan
Interests: software engineering; big data science; machine learning and deep learning; modelling

Guest Editor
School of Computer Science, Qatar University, Doha P.O. Box 2713, Qatar
Interests: IoT; big data; U/V/M/E-commerce; security; artificial intelligence; mobile banking; E-learning; IT adoption

Guest Editor
Department of Computer Science and Engineering of Systems, University of Zaragoza, 50001 Teruel, Spain
Interests: mobile applications for health and well-being; Internet of things; wearable sensors; big data; datamining; agent-based simulation and multi-agent systems

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) and Internet of Things (IoT) technologies are at the forefront of technological development worldwide. AI and IoT play important roles in a variety of fields, such as smart cities, smart surveillance, and many others. In smart cities, objects can communicate with individuals through the IoT and smart devices. With the help of different smart sensors, such as pollution detection and environmental sensors, smart cities are becoming greener. The concept of green IoT has been introduced with the main goal of reducing energy consumption. Smart parking systems built on AI and computational capabilities help to detect vehicle occupancy and congestion, and the use of IoT for parking helps to identify free parking slots. By using lightweight components, simpler network designs, and simpler data formats for the exchange of information, the IoT can be implemented in a way that is suitable for developing countries. For the collection of data from the environment, several sensors are deployed using network protocols.

The IoT is also currently playing a very important role in smart dairy farming. The world population is increasing day by day, and the demand for milk is increasing apace; dairy product consumption is greater in developed countries than in developing countries. The IoT is also used for drowsy driver detection, which is very important in preventing road accidents. The goal is to construct a smart alert technique that makes vehicles more intelligent by automatically detecting driver impairment; accordingly, drowsy driver alert systems based on eye detection have been proposed. IoT-based wireless sensor networks are also used for power quality control in smart grids: IoT-based power management systems require data from grid feeders, and wireless-sensor-network-based communication systems are used for smart monitoring and control in electric grids. IoT technologies are likewise applied to household waste management for the purpose of a green, smart society, with the aim of efficiently managing the waste from every home.

This Special Issue invites original research articles and review articles that explore the incorporation of AI into smart IoT and its applications. Research that considers technological and computational barriers to AI and smart IoT is particularly welcome.

Potential topics include, but are not limited to, the following:

  • Artificial intelligence and advances in smart Internet of Things;
  • AI-enabled smart Internet of Things;
  • AI and sustainable Internet of Things;
  • AI and smart cities;
  • Advances in Internet of Things;
  • Remote sensing and smart surveillance;
  • Intelligent Internet of Things;
  • Decision-support systems for Internet of Things;
  • Machine learning and Internet of Things;
  • Smart Industrial Internet of Things (IIoT).

Dr. Shah Nazir
Prof. Dr. Habib Ullah Khan
Dr. Iván García-Magariño
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (14 papers)


Research

Jump to: Review

15 pages, 4469 KiB  
Article
An Adaptive Distributed Denial of Service Attack Prevention Technique in a Distributed Environment
by Basheer Riskhan, Halawati Abd Jalil Safuan, Khalid Hussain, Asma Abbas Hassan Elnour, Abdelzahir Abdelmaboud, Fazlullah Khan and Mahwish Kundi
Sensors 2023, 23(14), 6574; https://doi.org/10.3390/s23146574 - 21 Jul 2023
Cited by 2 | Viewed by 1107
Abstract
Cyberattacks in the modern world are sophisticated and can go undetected in a dispersed setting. In a distributed setting, DoS and DDoS attacks cause resource unavailability. This has motivated the scientific community to suggest effective approaches in distributed contexts as a means of mitigating such attacks. The SYN flood is the most common sort of DDoS assault, up from 76% to 81% in Q2, according to Kaspersky’s Q3 report. Both direct and indirect approaches are available for launching DDoS attacks. In an indirect attack, controlled traffic is transmitted through zombies to reflectors to compromise the target host, whereas in a direct attack, controlled traffic is sent directly to zombies in order to assault the victim host. Reflectors are uncompromised systems that only send replies in response to a request. To mitigate such assaults, traffic shaping and pushback methods are utilised. The SYN Flood Attack Detection and Mitigation Technique (SFaDMT) is an adaptive heuristic-based method we employ to identify DDoS SYN flood assaults. This study suggested an effective strategy to identify and resist the SYN assault. A decision support mechanism served as the foundation for the suggested SFaDMT approach. The suggested model was simulated, analysed, and compared to the most recent method using the OMNET simulator. The outcome demonstrates how the suggested fix improved detection. Full article
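The adaptive heuristic this abstract describes is threshold-based detection of half-open SYN connections. As a rough illustration only (not the authors' SFaDMT, whose decision-support details are in the paper), a per-source half-open-ratio check might look like:

```python
from collections import defaultdict

class SynFloodDetector:
    """Toy adaptive heuristic: flag a source when its ratio of
    half-open connections (SYNs never followed by a completing ACK)
    exceeds a threshold, once enough traffic has been observed."""

    def __init__(self, base_threshold=0.8, min_syns=10):
        self.base_threshold = base_threshold
        self.min_syns = min_syns          # ignore sources with little traffic
        self.syns = defaultdict(int)      # SYNs seen per source IP
        self.completed = defaultdict(int) # completed handshakes per source IP

    def observe(self, src, kind):
        if kind == "SYN":
            self.syns[src] += 1
        elif kind == "ACK":
            self.completed[src] += 1

    def suspicious(self, src):
        s, c = self.syns[src], self.completed[src]
        if s < self.min_syns:
            return False
        return (s - c) / s > self.base_threshold
```

A legitimate host completes its handshakes, so its half-open ratio stays near zero; a flooding source sends SYNs without completing and crosses the threshold.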
(This article belongs to the Special Issue Artificial Intelligence and Advances in Smart IoT)

20 pages, 750 KiB  
Article
Dynamic Resource Optimization for Energy-Efficient 6G-IoT Ecosystems
by James Adu Ansere, Mohsin Kamal, Izaz Ahmad Khan and Muhammad Naveed Aman
Sensors 2023, 23(10), 4711; https://doi.org/10.3390/s23104711 - 12 May 2023
Cited by 6 | Viewed by 1581
Abstract
The problem of energy optimization for Internet of Things (IoT) devices is crucial for two reasons. Firstly, IoT devices powered by renewable energy sources have limited energy resources. Secondly, the aggregate energy requirement for these small and low-powered devices is translated into significant energy consumption. Existing works show that a significant portion of an IoT device’s energy is consumed by the radio sub-system. With the emerging sixth generation (6G), energy efficiency is a major design criterion for significantly increasing the IoT network’s performance. To solve this issue, this paper focuses on maximizing the energy efficiency of the radio sub-system. In wireless communications, the channel plays a major role in determining energy requirements. Therefore, a mixed-integer nonlinear programming problem is formulated to jointly optimize power allocation, sub-channel allocation, user selection, and the activated remote radio units (RRUs) in a combinatorial approach according to the channel conditions. Although it is an NP-hard problem, the optimization problem is solved through fractional programming properties, converting it into an equivalent tractable and parametric form. The resulting problem is then solved optimally by using the Lagrangian decomposition method and an improved Kuhn–Munkres algorithm. The results show that the proposed technique significantly improves the energy efficiency of IoT systems as compared to the state-of-the-art work. Full article
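The key step in this abstract is converting the fractional energy-efficiency objective into a tractable parametric form. A generic sketch of Dinkelbach's method over a toy finite candidate set (the rate and power functions below are invented for illustration, not the paper's resource-allocation model):

```python
def dinkelbach_max_ratio(candidates, f, g, tol=1e-9, max_iter=100):
    """Maximise f(x)/g(x) (with g > 0) over a finite candidate set by
    repeatedly solving the parametric problem max_x f(x) - q*g(x) and
    updating q = f(x*)/g(x*) until f(x*) - q*g(x*) is ~0 (Dinkelbach)."""
    q = 0.0
    best = candidates[0]
    for _ in range(max_iter):
        best = max(candidates, key=lambda x: f(x) - q * g(x))
        if abs(f(best) - q * g(best)) < tol:
            break  # parametric optimum reached: q is the best ratio
        q = f(best) / g(best)
    return best, q
```

With a toy rate function 10x - x² and power cost 1 + x over candidates {1, 2, 3, 4}, the method converges to x = 2 with ratio 16/3, matching a direct check of all four ratios.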
(This article belongs to the Special Issue Artificial Intelligence and Advances in Smart IoT)

15 pages, 3058 KiB  
Article
Video Process Mining and Model Matching for Intelligent Development: Conformance Checking
by Shuang Chen, Minghao Zou, Rui Cao, Ziqi Zhao and Qingtian Zeng
Sensors 2023, 23(8), 3812; https://doi.org/10.3390/s23083812 - 07 Apr 2023
Cited by 1 | Viewed by 1415
Abstract
Traditional business process-extraction models mainly rely on structured data such as logs, which makes them difficult to apply to unstructured data such as images and videos, and thus impossible to use for process extraction in many data scenarios. Moreover, the generated process model is not checked for consistency against a predefined model, resulting in only a single, unverified understanding of the process. To solve these two problems, a method for extracting process models from videos and analyzing their consistency is proposed. Video data are widely used to capture the actual performance of business operations and are key sources of business data. The method comprises video data preprocessing, action placement and recognition, predefined models, and conformance verification, extracting a process model from videos and analyzing the consistency between the extracted model and the predefined model. Finally, the similarity was calculated using graph edit distances and adjacency relationships (GED_NAR). The experimental results showed that the process model mined from the video was better in line with how the business was actually carried out than the process model derived from the noisy process logs. Full article
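As a simplified stand-in for the GED_NAR measure above (the paper's measure combines graph edit distance with node adjacency relationships), two process models can be compared by the Jaccard similarity of their directly-follows adjacency pairs:

```python
def directly_follows(trace):
    """Adjacency (directly-follows) pairs of one activity sequence."""
    return {(a, b) for a, b in zip(trace, trace[1:])}

def adjacency_similarity(model_a, model_b):
    """Jaccard similarity over the directly-follows pairs of the traces
    in each model; 1.0 means identical adjacency behaviour."""
    rel_a = set().union(*(directly_follows(t) for t in model_a))
    rel_b = set().union(*(directly_follows(t) for t in model_b))
    if not rel_a and not rel_b:
        return 1.0
    return len(rel_a & rel_b) / len(rel_a | rel_b)
```

For example, a mined model with one trace start→scan→pack→ship shares three of four adjacency pairs with a predefined model that also allows start→scan→check, giving a similarity of 0.75.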
(This article belongs to the Special Issue Artificial Intelligence and Advances in Smart IoT)

18 pages, 3020 KiB  
Article
Customer Analysis Using Machine Learning-Based Classification Algorithms for Effective Segmentation Using Recency, Frequency, Monetary, and Time
by Asmat Ullah, Muhammad Ismail Mohmand, Hameed Hussain, Sumaira Johar, Inayat Khan, Shafiq Ahmad, Haitham A. Mahmoud and Shamsul Huda
Sensors 2023, 23(6), 3180; https://doi.org/10.3390/s23063180 - 16 Mar 2023
Cited by 3 | Viewed by 4794
Abstract
Customer segmentation has been a hot topic for decades, and the competition among businesses makes it more challenging. The recently introduced Recency, Frequency, Monetary, and Time (RFMT) model used an agglomerative algorithm for segmentation and a dendrogram for clustering, which solved the problem; however, there is still room for more than a single algorithm to analyze the data’s characteristics. The proposed approach extends the RFMT model on Pakistan’s largest e-commerce dataset by introducing k-means, Gaussian, and Density-Based Spatial Clustering of Applications with Noise (DBSCAN) clustering besides the agglomerative algorithm for segmentation. The number of clusters is determined through different cluster factor analysis methods, i.e., the elbow, dendrogram, silhouette, Calinsky–Harabasz, Davies–Bouldin, and Dunn indexes. A stable and distinctive clustering is finally elected using the state-of-the-art majority voting (mode version) technique, which resulted in three different clusters. Besides segmentations such as by product category, year, fiscal year, and month, the approach also includes transaction-status and season-wise segmentations. This segmentation will help the retailer improve customer relationships, implement good strategies, and improve targeted marketing. Full article
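The RFMT features that feed such clustering can be computed directly from a transaction log. A minimal sketch; the record layout `(customer_id, date, amount)` is our assumption, not the paper's schema:

```python
from datetime import date

def rfmt_features(transactions, today):
    """Per-customer (recency_days, frequency, monetary, tenure_days)
    from (customer_id, date, amount) records. Tenure (the 'Time' in
    RFMT) is taken here as days since the customer's first purchase."""
    per_cust = {}
    for cust, d, amount in transactions:
        first, last, n, total = per_cust.get(cust, (d, d, 0, 0.0))
        per_cust[cust] = (min(first, d), max(last, d), n + 1, total + amount)
    return {
        cust: ((today - last).days, n, total, (today - first).days)
        for cust, (first, last, n, total) in per_cust.items()
    }
```

The resulting feature vectors would then be standardised and passed to k-means, DBSCAN, or agglomerative clustering.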
(This article belongs to the Special Issue Artificial Intelligence and Advances in Smart IoT)

11 pages, 3461 KiB  
Communication
Accurate Image Multi-Class Classification Neural Network Model with Quantum Entanglement Approach
by Farina Riaz, Shahab Abdulla, Hajime Suzuki, Srinjoy Ganguly, Ravinesh C. Deo and Susan Hopkins
Sensors 2023, 23(5), 2753; https://doi.org/10.3390/s23052753 - 02 Mar 2023
Cited by 6 | Viewed by 2884
Abstract
Quantum machine learning (QML) has attracted significant research attention over the last decade. Multiple models have been developed to demonstrate the practical applications of the quantum properties. In this study, we first demonstrate that the previously proposed quanvolutional neural network (QuanvNN) using a randomly generated quantum circuit improves the image classification accuracy of a fully connected neural network against the Modified National Institute of Standards and Technology (MNIST) dataset and the Canadian Institute for Advanced Research 10 class (CIFAR-10) dataset from 92.0% to 93.0% and from 30.5% to 34.9%, respectively. We then propose a new model referred to as a Neural Network with Quantum Entanglement (NNQE) using a strongly entangled quantum circuit combined with Hadamard gates. The new model further improves the image classification accuracy of MNIST and CIFAR-10 to 93.8% and 36.0%, respectively. Unlike other QML methods, the proposed method does not require optimization of the parameters inside the quantum circuits; hence, it requires only limited use of the quantum circuit. Given the small number of qubits and relatively shallow depth of the proposed quantum circuit, the proposed method is well suited for implementation in noisy intermediate-scale quantum computers. While promising results were obtained by the proposed method when applied to the MNIST and CIFAR-10 datasets, a test against a more complicated German Traffic Sign Recognition Benchmark (GTSRB) dataset degraded the image classification accuracy from 82.2% to 73.4%. The exact causes of the performance improvement and degradation are currently an open question, prompting further research on the understanding and design of suitable quantum circuits for image classification neural networks for colored and complex data. Full article
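The basic entangling pattern underlying such circuits, a Hadamard gate followed by a CNOT, can be simulated with a plain state vector and no quantum library. This toy two-qubit sketch is illustrative only and far simpler than the strongly entangled circuits the paper uses:

```python
import math

def apply_hadamard_q0(state):
    """H on qubit 0 of a 2-qubit state vector [|00>, |01>, |10>, |11>].
    Qubit 0 is the most significant bit, so H mixes the amplitude pairs
    (|00>, |10>) and (|01>, |11>)."""
    h = 1 / math.sqrt(2)
    a, b, c, d = state
    return [h * (a + c), h * (b + d), h * (a - c), h * (b - d)]

def apply_cnot(state):
    """CNOT with qubit 0 as control, qubit 1 as target: swaps the
    |10> and |11> amplitudes."""
    a, b, c, d = state
    return [a, b, d, c]

# Starting from |00>, H then CNOT yields the maximally entangled Bell state.
bell = apply_cnot(apply_hadamard_q0([1.0, 0.0, 0.0, 0.0]))
```

The result is (|00> + |11>)/√2, the entangled state that measurement on either qubit cannot describe independently.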
(This article belongs to the Special Issue Artificial Intelligence and Advances in Smart IoT)

7 pages, 237 KiB  
Article
Reservoir Lithology Identification Based on Multicore Ensemble Learning and Multiclassification Algorithm Based on Noise Detection Function
by Menglei Li and Chaomo Zhang
Sensors 2023, 23(4), 1781; https://doi.org/10.3390/s23041781 - 05 Feb 2023
Viewed by 956
Abstract
Reservoir lithology identification is an important part of well logging interpretation. The accuracy of identification affects subsequent exploration and development work, such as reservoir division and reserve prediction, so correct reservoir lithology identification has important geological significance. In this paper, the wavelet threshold method is used to preliminarily reduce the noise of the curve, and the MKBoost-MC model is then used to identify the reservoir lithology. It is found that the prediction accuracy of MKBoost-MC is higher than that of the traditional SVM algorithm, and though the operation of MKBoost-MC takes a long time, the speed of MKBoost-MC reservoir lithology identification is much higher than that of manual processing. The accuracy of MKBoost-MC for reservoir lithology recognition reaches the application standard, and the algorithm effectively suppresses the effect of the unbalanced distribution of lithology types. Overall, the MKBoost-MC reservoir lithology identification method has good applicability and practicality for the lithology identification problem. Full article
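The wavelet threshold denoising step mentioned above can be illustrated with a one-level Haar transform and soft thresholding. This is a minimal sketch, not the paper's exact wavelet basis or threshold rule:

```python
import math

def haar_denoise(signal, threshold):
    """One-level Haar wavelet soft-threshold denoising.
    Requires an even-length signal."""
    s = 1 / math.sqrt(2)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    # soft-threshold the detail (noise-dominated) coefficients
    soft = [math.copysign(max(abs(d) - threshold, 0.0), d) for d in detail]
    # inverse transform: rebuild each sample pair from (approx, detail)
    out = []
    for a, d in zip(approx, soft):
        out.extend([s * (a + d), s * (a - d)])
    return out
```

Small pairwise fluctuations fall below the threshold and are zeroed, so each sample pair is replaced by its local average while larger trends survive.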
(This article belongs to the Special Issue Artificial Intelligence and Advances in Smart IoT)
18 pages, 8480 KiB  
Article
IoMT-Enabled Computer-Aided Diagnosis of Pulmonary Embolism from Computed Tomography Scans Using Deep Learning
by Mudasir Khan, Pir Masoom Shah, Izaz Ahmad Khan, Saif ul Islam, Zahoor Ahmad, Faheem Khan and Youngmoon Lee
Sensors 2023, 23(3), 1471; https://doi.org/10.3390/s23031471 - 28 Jan 2023
Cited by 8 | Viewed by 2181
Abstract
The Internet of Medical Things (IoMT) has revolutionized Ambient Assisted Living (AAL) by interconnecting smart medical devices. These devices generate a large amount of data without human intervention. Learning-based sophisticated models are required to extract meaningful information from this massive surge of data. In this context, the Deep Neural Network (DNN) has proven to be a powerful tool for disease detection. Pulmonary Embolism (PE) is a leading cause of death, with a death toll of 180,000 per year in the US alone. It arises from a blood clot in the pulmonary arteries, which blocks the blood supply to the lungs or a part of the lung. Early diagnosis and treatment of PE could reduce the mortality rate. Doctors and radiologists prefer Computed Tomography (CT) scans as a first-hand tool, and a single study contains 200 to 300 images for diagnosis. Most of the time, it is difficult for doctors and radiologists to maintain concentration while going through all the scans, resulting in a missed or false diagnosis. Given this, there is a need for an automatic Computer-Aided Diagnosis (CAD) system to assist doctors and radiologists in decision-making. To develop such a system, in this paper, we proposed a deep learning framework based on DenseNet201 to classify PE into nine classes in CT scans. We utilized DenseNet201 as a feature extractor and customized fully connected decision-making layers. The model was trained on the Radiological Society of North America (RSNA)-Pulmonary Embolism Detection Challenge (2020) Kaggle dataset and achieved promising results of 88%, 88%, 89%, and 90% in terms of the accuracy, sensitivity, specificity, and Area Under the Curve (AUC), respectively. Full article
(This article belongs to the Special Issue Artificial Intelligence and Advances in Smart IoT)

14 pages, 2193 KiB  
Article
Assessing the Role of AI-Based Smart Sensors in Smart Cities Using AHP and MOORA
by Habib Ullah Khan and Shah Nazir
Sensors 2023, 23(1), 494; https://doi.org/10.3390/s23010494 - 02 Jan 2023
Cited by 3 | Viewed by 2203
Abstract
We know that in today’s advanced world, artificial intelligence (AI) and machine learning (ML)-grounded methodologies are playing a very optimistic role in performing difficult and time-consuming activities very conveniently and quickly. However, for the training and testing of these procedures, the main factor is the availability of a huge amount of data, called big data. With the emerging techniques of the Internet of Everything (IoE) and the Internet of Things (IoT), it is very feasible to collect a large volume of data with the help of smart and intelligent sensors. Based on these smart sensing devices, very innovative and intelligent hardware components can be made for prediction and recognition purposes. A detailed discussion was carried out on the development and employment of various detectors for providing people with effective services, especially in the case of smart cities. With these devices, a very healthy and intelligent environment can be created for people to live in safely and happily. With the use of modern technologies in integration with smart sensors, it is possible to use energy resources very productively. Smart vehicles can be developed to sense any emergency, to avoid injuries and fatal accidents. These sensors can be very helpful in management and monitoring activities for the enhancement of productivity. Several significant aspects are obtained from the available literature, and significant articles are selected from the literature to properly examine the uses of sensor technology for the development of smart infrastructure. The analytical hierarchy process (AHP) is used to give these attributes weights. Finally, the weights are used with the multi-objective optimization on the basis of ratio analysis (MOORA) technique to provide the different options in their order of importance. Full article
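The AHP-then-MOORA pipeline in this abstract follows a standard recipe: derive criterion weights from a pairwise comparison matrix, then rank alternatives by weighted normalized ratios. A compact sketch with invented example numbers (not the paper's attributes or judgments):

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority vector: normalise each column of the
    pairwise comparison matrix, then average across each row."""
    n = len(pairwise)
    col_sums = [sum(pairwise[i][j] for i in range(n)) for j in range(n)]
    return [sum(pairwise[i][j] / col_sums[j] for j in range(n)) / n
            for i in range(n)]

def moora_rank(matrix, weights, benefit):
    """MOORA: vector-normalise each criterion column, then score each
    alternative as the weighted sum of benefit criteria minus cost
    criteria; return alternative indices from best to worst."""
    n_crit = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    scores = []
    for row in matrix:
        s = sum((1 if benefit[j] else -1) * weights[j] * row[j] / norms[j]
                for j in range(n_crit))
        scores.append(s)
    return sorted(range(len(matrix)), key=lambda i: scores[i], reverse=True)
```

With two criteria, the pairwise judgment "criterion 1 is 3 times as important as criterion 2" yields weights of roughly 0.75 and 0.25, which MOORA then applies to rank the alternatives.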
(This article belongs to the Special Issue Artificial Intelligence and Advances in Smart IoT)

14 pages, 1638 KiB  
Article
Improving Convergence Speed of Bat Algorithm Using Multiple Pulse Emissions along Multiple Directions
by Waqar Younas, Gauhar Ali, Naveed Ahmad, Qamar Abbas, Muhammad Talha Masood, Asim Munir and Mohammed ElAffendi
Sensors 2022, 22(23), 9513; https://doi.org/10.3390/s22239513 - 05 Dec 2022
Cited by 1 | Viewed by 1288
Abstract
Metaheuristic algorithms are effectively used to search an optimization solution space for optimal solutions. They are essentially generalizations of local search that can provide useful solutions for optimization problems, and they offer several benefits, chief among them the ability to provide fast and accurate solutions to a huge range of complex problems. The Bat Algorithm (BA) is one of the more popular metaheuristics used to solve theoretical and real-world optimization problems. BA sometimes fails to find the global optimum and gets stuck in local optima because of insufficient exploration and exploitation. We have improved the BA to boost its local search ability and diminish the premature convergence problem. An improved search equation, which carries more of the information gathered during the search, is used to generate solutions. A test set of benchmark functions is utilized to verify the proposed method’s performance. The simulation results show that the proposed method finds better optimal solutions than the compared methods. Full article
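For reference, the baseline Bat Algorithm updates each bat's pulse frequency, velocity, and position, plus a local random walk around the current best solution. A minimal sketch of that baseline (not the multi-pulse, multi-direction variant this paper proposes), minimizing a toy sphere function:

```python
import random

def bat_algorithm(obj, dim, bounds, n_bats=20, iters=200,
                  fmin=0.0, fmax=2.0, seed=1):
    """Basic Bat Algorithm (after Yang's original scheme) minimising
    obj over a box; a baseline sketch, not the paper's improved BA."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bats)]
    vel = [[0.0] * dim for _ in range(n_bats)]
    best = min(pos, key=obj)[:]
    for _ in range(iters):
        for i in range(n_bats):
            f = fmin + (fmax - fmin) * rng.random()  # random pulse frequency
            for d in range(dim):
                vel[i][d] += (pos[i][d] - best[d]) * f
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            if rng.random() > 0.5:  # local random walk around the best bat
                pos[i] = [min(max(best[d] + 0.01 * rng.gauss(0, 1), lo), hi)
                          for d in range(dim)]
            if obj(pos[i]) < obj(best):  # greedy acceptance of improvements
                best = pos[i][:]
    return best
```

The greedy local walk around the best solution is what drives convergence here; the paper's contribution is emitting multiple pulses along multiple directions to speed that convergence up.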
(This article belongs to the Special Issue Artificial Intelligence and Advances in Smart IoT)

22 pages, 524 KiB  
Article
Malware Detection in Internet of Things (IoT) Devices Using Deep Learning
by Sharjeel Riaz, Shahzad Latif, Syed Muhammad Usman, Syed Sajid Ullah, Abeer D. Algarni, Amanullah Yasin, Aamir Anwar, Hela Elmannai and Saddam Hussain
Sensors 2022, 22(23), 9305; https://doi.org/10.3390/s22239305 - 29 Nov 2022
Cited by 8 | Viewed by 4735
Abstract
Internet of Things (IoT) device usage is increasing exponentially with the spread of the internet. With the increasing amount of data on IoT devices, these devices are becoming vulnerable to malware attacks; therefore, malware detection is an important issue for IoT devices. An effective, reliable, and time-efficient mechanism is required for the identification of sophisticated malware. Researchers have proposed multiple methods for malware detection in recent years; however, accurate detection remains a challenge. We propose a deep learning-based ensemble classification method for the detection of malware in IoT devices. It uses a three-step approach: in the first step, data are preprocessed using scaling, normalization, and de-noising; in the second step, features are selected and one-hot encoding is applied; this is followed by an ensemble classifier based on CNN and LSTM outputs for the detection of malware. We have compared results with state-of-the-art methods, and our proposed method outperforms the existing methods on standard datasets with an average accuracy of 99.5%. Full article
(This article belongs to the Special Issue Artificial Intelligence and Advances in Smart IoT)

12 pages, 1590 KiB  
Article
Multi-Class Skin Lesions Classification Using Deep Features
by Muhammad Usama, M. Asif Naeem and Farhaan Mirza
Sensors 2022, 22(21), 8311; https://doi.org/10.3390/s22218311 - 29 Oct 2022
Cited by 4 | Viewed by 1896
Abstract
Skin cancer classification is a complex and time-consuming task. Existing approaches use segmentation to improve accuracy and efficiency, but due to the different sizes and shapes of lesions, segmentation is not a suitable approach. In this research study, we proposed an improved automated system based on hybrid and optimal feature selection. Firstly, we balanced our dataset by applying three different transformation techniques, which include brightness, sharpening, and contrast enhancement. Secondly, we retrained two CNNs, Darknet53 and Inception V3, using transfer learning. Thirdly, the retrained models were used to extract deep features from the dataset. Lastly, optimal features were selected using moth flame optimization (MFO) to overcome the curse of dimensionality. This helped us in improving the accuracy and efficiency of our model. We achieved 95.9%, 95.0%, and 95.8% on cubic SVM, quadratic SVM, and ensemble subspace discriminants, respectively. We compared our technique with state-of-the-art approaches. Full article
(This article belongs to the Special Issue Artificial Intelligence and Advances in Smart IoT)

17 pages, 1865 KiB  
Article
Efficient Top-K Identical Frequent Itemsets Mining without Support Threshold Parameter from Transactional Datasets Produced by IoT-Based Smart Shopping Carts
by Saif Ur Rehman, Noha Alnazzawi, Jawad Ashraf, Javed Iqbal and Shafiullah Khan
Sensors 2022, 22(20), 8063; https://doi.org/10.3390/s22208063 - 21 Oct 2022
Cited by 4 | Viewed by 1735
Abstract
Internet of Things (IoT)-backed smart shopping carts are generating an extensive amount of data in shopping markets around the world. This data can be cleaned and utilized for setting business goals and strategies. Artificial intelligence (AI) methods are used to efficiently extract meaningful patterns or insights from such huge amounts of data, or big data. One such technique is Association Rule Mining (ARM), which is used to extract strategic information from the data. The crucial step in ARM is Frequent Itemsets Mining (FIM), followed by association rule generation. The FIM process starts by tuning the support threshold parameter from the user to produce the number of required frequent patterns, so the user applies hit-and-trial methods, rerunning the aforesaid routine until the required number of patterns is received. The research community has therefore shifted its focus towards mining the top-K most frequent patterns without a user-tuned support threshold parameter. Top-K most frequent patterns mining is considered a harder task than user-tuned support-threshold-based FIM. One of the reasons why top-K most frequent patterns mining techniques are computationally intensive is the fact that they produce a large number of candidate itemsets. These methods also do not use any explicit pruning mechanism apart from the internally auto-maintained support threshold parameter. Therefore, we propose an efficient TKIFIs Miner algorithm that uses a depth-first search strategy for top-K identical frequent patterns mining. The TKIFIs Miner uses specialized one- and two-itemsets-based pruning techniques for topmost patterns mining. Comparative analysis is performed on special benchmark datasets, for example, Retail with 16,469 items, and T40I10D100K and T10I4D100K with 1000 items each. The evaluation results have proven that the TKIFIs Miner is at the top of the line, compared to recently available topmost patterns mining methods not using the support threshold parameter. Full article
(This article belongs to the Special Issue Artificial Intelligence and Advances in Smart IoT)
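To make the task concrete: the abstract above contrasts user-tuned, support-threshold-based FIM with top-K mining, where the algorithm itself returns the K most frequent itemsets. The sketch below is a deliberately naive illustration of that top-K task (brute-force enumeration of small itemsets), not the paper's TKIFIs Miner algorithm or its pruning strategy; the function name, dataset, and `max_size` cap are illustrative assumptions.

```python
from itertools import combinations
from collections import Counter

def top_k_frequent_itemsets(transactions, k, max_size=2):
    """Return the k most frequent itemsets (up to max_size items each),
    with no user-supplied support threshold -- the top-K formulation."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))  # deduplicate and fix item order
        for size in range(1, max_size + 1):
            for itemset in combinations(items, size):
                counts[itemset] += 1
    # Counter.most_common plays the role of the internally maintained
    # support threshold: only the K best survive.
    return counts.most_common(k)

transactions = [
    ["bread", "milk"],
    ["bread", "butter", "milk"],
    ["butter", "milk"],
    ["bread", "butter"],
]
print(top_k_frequent_itemsets(transactions, 3))
```

Enumerating every candidate itemset like this is exactly the combinatorial blow-up the abstract attributes to existing top-K methods; the paper's contribution is avoiding it via depth-first search with one- and two-itemset pruning.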

Review

34 pages, 754 KiB  
Review
Systematic and Comprehensive Review of Clustering and Multi-Target Tracking Techniques for LiDAR Point Clouds in Autonomous Driving Applications
by Muhammad Adnan, Giulia Slavic, David Martin Gomez, Lucio Marcenaro and Carlo Regazzoni
Sensors 2023, 23(13), 6119; https://doi.org/10.3390/s23136119 - 03 Jul 2023
Cited by 1 | Viewed by 2543
Abstract
Autonomous vehicles (AVs) rely on advanced sensory systems, such as Light Detection and Ranging (LiDAR), to function seamlessly in intricate and dynamic environments. LiDAR produces highly accurate 3D point clouds, which are vital for the detection, classification, and tracking of multiple targets. A systematic review and classification of clustering and Multi-Target Tracking (MTT) techniques is necessary due to the inherent challenges posed by LiDAR data, such as density, noise, and varying sampling rates. In this study, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology was employed to examine the challenges and advancements in clustering and MTT techniques for LiDAR point clouds in the context of autonomous driving. Searches were conducted in major databases, including IEEE Xplore, ScienceDirect, SpringerLink, ACM Digital Library, and Google Scholar, using customized search strategies. Through rigorous screening and evaluation, we identified and critically reviewed 76 relevant studies, assessing their methodological quality, adequacy of data handling, and reporting compliance. This comprehensive review and classification provides a detailed overview of current challenges, research gaps, and advancements in clustering and MTT techniques for LiDAR point clouds, thus contributing to the field of autonomous driving. Researchers and practitioners in autonomous driving will benefit from this study, which was conducted with systematic transparency and reproducibility.
(This article belongs to the Special Issue Artificial Intelligence and Advances in Smart IoT)

34 pages, 1905 KiB  
Review
Network Threat Detection Using Machine/Deep Learning in SDN-Based Platforms: A Comprehensive Analysis of State-of-the-Art Solutions, Discussion, Challenges, and Future Research Direction
by Naveed Ahmed, Asri bin Ngadi, Johan Mohamad Sharif, Saddam Hussain, Mueen Uddin, Muhammad Siraj Rathore, Jawaid Iqbal, Maha Abdelhaq, Raed Alsaqour, Syed Sajid Ullah and Fatima Tul Zuhra
Sensors 2022, 22(20), 7896; https://doi.org/10.3390/s22207896 - 17 Oct 2022
Cited by 11 | Viewed by 9438
Abstract
Software defined networking (SDN) has ushered in a revolution in network technology: it makes it possible to control the network from a central location and provides an overview of the network’s security. Despite this, SDN has a single point of failure that increases the risk of potential threats. Network intrusion detection systems (NIDS) prevent intrusions into a network and preserve the network’s integrity, availability, and confidentiality. Much work has been done on NIDS, but improvements are still needed in reducing false alarms and increasing threat detection accuracy. Recently, advanced approaches such as deep learning (DL) and machine learning (ML) have been implemented in SDN-based NIDS to overcome security issues within a network. In the first part of this survey, we introduce NIDS theory and recent research on the topic. We then conduct a thorough analysis of the most recent ML- and DL-based NIDS approaches for the reliable identification of potential security risks. Finally, we discuss the opportunities and challenges that lie ahead for future research on SDN-based ML and DL for NIDS.
(This article belongs to the Special Issue Artificial Intelligence and Advances in Smart IoT)
