Review

A Review on Deep Learning Techniques for IoT Data

1 Vellore Institute of Technology (VIT), Vellore 632014, India
2 Department of Computer Science, College of Computer and Information Sciences, Majmaah University, Al-Majmaah 11952, Saudi Arabia
3 School of Manufacturing Engineering, Suranaree University of Technology, Nakhon Ratchasima 30000, Thailand
4 Department of Computer Engineering, College of Computer and Information Sciences, Majmaah University, Al-Majmaah 11952, Saudi Arabia
* Authors to whom correspondence should be addressed.
Electronics 2022, 11(10), 1604; https://doi.org/10.3390/electronics11101604
Submission received: 7 April 2022 / Revised: 1 May 2022 / Accepted: 6 May 2022 / Published: 18 May 2022

Abstract

Continuous growth in software, hardware and internet technology has enabled the growth of internet-based sensor tools that provide physical-world observations and data measurements. The Internet of Things (IoT) is made up of billions of smart things that communicate with each other, further extending the boundaries of the world's physical and virtual entities. These intelligent things produce or collect massive amounts of data daily across a broad range of applications and fields. Analytics on these huge data is a critical tool for discovering new knowledge, foreseeing future knowledge and making control decisions, which makes IoT a worthy business paradigm and an enabling technology. Deep learning has been used in a variety of projects involving IoT and mobile apps, with encouraging early results. With its data-driven, anomaly-based methodology and capacity to detect developing, unexpected attacks, deep learning can deliver cutting-edge solutions for IoT intrusion detection. In this paper, the increased amount of information gathered or produced is used to further develop intelligence and application capabilities through Deep Learning (DL) techniques. Many researchers have been attracted to the various fields of IoT, and both DL and IoT techniques have been combined. Different studies have suggested DL as a feasible solution to manage data produced by IoT because it was intended to handle a variety of data in large amounts, requiring almost real-time processing. We start by discussing the introduction to IoT, data generation and data processing. We also discuss the various DL approaches with their procedures. We survey and summarize major reported efforts on DL in the IoT area across various datasets. The features, applications and challenges that DL addresses to empower IoT applications are also discussed, which can motivate and inspire further developments in this promising field.

1. Introduction

Over the past decade, applications based on smartphones, sensors and actuators have become increasingly intelligent, facilitating communication between devices and the performance of more complex tasks. The number of networked devices exceeded the world population [1] in 2008, and the figure has continued to increase exponentially. In the age of the Internet of Things (IoT), smartphones, embedded systems, wireless sensors and almost every device are connected by a local network or the internet. The growth of the IoT, which includes smartphones [2], sensor networks [3], unmanned aerial vehicles (UAVs) [4,5], cognitively smart systems [6], and so on, has created a multitude of new applications across various mobile and remote platforms. The amount of data obtained from such devices increases with the growing number of devices. New technologies are emerging that evaluate the gathered data for practical connections and decision making, progressing towards Artificial Intelligence (AI) using Machine Learning (ML) and Deep Learning (DL) algorithms.
To build successful IoT applications, we generally adopt a workflow model that consists of collecting data, analyzing data, visualizing data and evaluating data [7,8]. Data analysis is a crucial, compute-intensive dimension in which historically developed technologies typically combine technical expertise and ML (e.g., logistic regression, support vector machines and random forests) for classification or regression problems such as traffic condition forecasting [9], tracking vehicles [10], estimating delivery time [11], etc. Furthermore, as society enters the "big data" era, these traditional methods are not strong enough to process the huge, volatile and irregular data from invisible, heterogeneous, IoT-based databases. Nearly all conventional systems rely on manually designed features, and their efficiency relies heavily on prior knowledge of particular areas. Most learning techniques used in those systems employ shallow architectures, whose modeling and representational capacity are very limited. As such, a much more effective analytical tool is essential to exploit the maximum potential of the invaluable raw data produced in various IoT operations.
The annual economic effect of IoT in 2025 will range from $2.7 to $6.2 trillion based on McKinsey's study of the global economic consequences of IoT [12]. Healthcare takes the largest share of this sector at about 41%, followed by industry and oil with 33% and 7% of the IoT sector, respectively. Additional fields such as transport, irrigation, public infrastructure, security and retail account for approximately 15% of the entire IoT sector. Such expectations imply immense and rapid growth of IoT services, their data generation and, therefore, the corresponding demand in the upcoming years. In McKinsey's report [12], the economic impact of machine learning is characterized as automated learning: 'the use of computers to carry out tasks that rely on complex assessments, precise evaluations and innovative problem solving'. The study identifies ML techniques such as DL and neural networks as the key enablers of knowledge automation.
Machine-to-machine (M2M) communication can be short range, using Wi-Fi, Bluetooth and ZigBee technologies, or wide area, using networks such as LoRa, CAT-M1, Sigfox, GSM, 4G, LTE and 5G [13]. Since IoT devices are used extensively in all sorts of everyday applications, the cost of IoT devices needs to be kept low. Furthermore, IoT devices should be able to handle fundamental tasks such as data collection, M2M interaction, etc. IoT is also tightly linked to "big data", as IoT devices collect and exchange vast amounts of data continuously. In general, therefore, an IoT infrastructure uses methods to manage, store and evaluate massive data [14,15]. To facilitate M2M communication with protocols such as AMQP, MQTT, CoAP and HTTP [16], it has become necessary in IoT infrastructure to use IoT platforms such as Thingsboard, Thingspeak, DeviceHive or Mainflux. Depending on the application, it is often necessary for certain data processing to occur on IoT devices instead of on centralized nodes in the "cloud computing" network. As the processing moves partially to the end network elements, a new data processing model, called "edge computing", is introduced [17]. However, these devices are often low-end and may not be suitable for heavy workloads. Therefore, an intermediate node with enough resources, physically close to the end network components, is required to manage advanced processing tasks, so that the burden caused by massive transmission of all data to internal cloud nodes is reduced. Here, "fog nodes" [18] are introduced to assist big data management on IoT devices through the provision of storage, processing and networking services. Finally, the data is stored in cloud storage, where advanced analysis by means of different ML and DL technologies, and sharing with other devices, leads to the establishment of smart apps with modern added value. DL has received this intense publicity because conventional machine learning methods do not meet the current analytical requirements of IoT systems. Instead, following the structure of IoT data generation and processing shown in Figure 1, IoT systems require both conventional data analytic methods and AI methods: DL methods are used to analyze the big data in the IoT cloud, while streaming and fast data analysis is performed at the edge or fog layer, close to the IoT devices.
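To make the protocol layer concrete, the following minimal sketch shows how an IoT device could publish sensor readings to an MQTT broker using the paho-mqtt library (1.x client API); the broker address, topic name and simulated temperature payload are hypothetical placeholders, not part of any specific platform mentioned above.

# Minimal M2M publishing sketch with paho-mqtt (1.x client API); broker
# address and topic are hypothetical placeholders.
import json
import random
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.com"   # hypothetical broker address
TOPIC = "sensors/room1/temperature"  # hypothetical topic

client = mqtt.Client()
client.connect(BROKER_HOST, port=1883, keepalive=60)
client.loop_start()                  # handle network traffic in a background thread

for _ in range(10):                  # publish a few simulated readings
    payload = {"ts": time.time(), "temp_c": 20.0 + 5.0 * random.random()}
    client.publish(TOPIC, json.dumps(payload), qos=1)
    time.sleep(5)

client.loop_stop()
client.disconnect()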
While much IoT research has been conducted in recent years, the field of deep learning for IoT applications remains in its infancy. A few researchers [19,20,21,22,23,24] have reviewed, respectively, ML in wireless sensor networks (WSNs), the implementation of DL methods for healthcare, the main DL approaches and their applicability to IoT applications focusing on big and streaming data analysis, and DL algorithms with their applications for smart development. After surveying the existing papers, we find that there is still no survey that thoroughly explores a wide range of IoT devices using DL. We also believe it is time to review the current literature and inspire future study recommendations. To this end, this paper summarizes current research developments and patterns in using DL techniques to promote IoT applications. We demonstrate how DL can be used to enhance IoT applications from various perspectives, for example, safety monitoring, disease analysis, indoor localization, intelligent control, traffic prediction, residential robots, autonomous driving, fault assessment and manufacturing inspection. The issues, challenges and possible research directions for DL in IoT applications are also discussed to encourage and empower future developments in this promising area.
The rest of this paper is organized as follows: Section 2 covers a variety of popular and common DNN architectures and offers a concise overview of advances and fast DL architectures along with state-of-the-art DL algorithms. Section 3 reviews IoT applications and challenges in various domains (e.g., education, manufacturing, smart city, healthcare, Intelligent Transportation Systems (ITS), and agriculture) using DL. Section 4 concludes the article.

2. Deep Learning Techniques

Stakeholders must clearly grasp the meaning, building blocks, potentials and challenges of the IoT and its derivative, big data. IoT and big data are connected in two ways: IoT is one of the leading producers of big data, and it is a major target of big data research aimed at enhancing IoT processes and services [25]. In addition, IoT big data research has shown that it gives value to society. IoT data is distinct from general big data. We need to explore the properties of IoT data [26] and how they differ from traditional big data to analyze the requirements of IoT data analytics.
Here, we discuss the advantages of DL over conventional ML methods, which highlight the benefits of DL in IoT applications [27,28]. Compared with common ML methods, DL has a more powerful capacity to generalize the dynamic relations in vast raw data in different IoT applications. The ability to process data generally depends on the depth and the different architectures of learning models, including convolutional architectures; thus, on big data, DL models can most likely perform better, while common learning models can easily be overwhelmed when dealing with a flood of data. Deep learning is an end-to-end process that is capable of learning how to derive useful features from raw data, without time-consuming and labor-intensive hand-crafted feature engineering. In recent years, DL models have received more attention than other conventional ML approaches. Figure 2 shows the search pattern in Google Trends, in which DL is increasingly popular compared with other ML algorithms such as random forest, k-means, SVM and decision tree. Moreover, as per the Google trend, Figure 3 shows that the CNN method has become the most popular of all DL methods.
Deep learning is a recently developed multilayer neural network learning approach. It has revolutionized the concept of machine learning, propelling artificial intelligence and human–computer interaction forward in leaps and bounds. The authors of [29] performed evaluation tests of a CNN and a DBN on the MNIST database and a real-world handwritten character database, which gave 99.28% and 98.12% accuracy, respectively. Despite the complex structure and the diversity of registered user data, the work in [30] assumes a model inversion attack (MIA) in a semi-white-box scenario, where the system model structure and parameters are available but no user data information is, and verifies it as a serious threat even for a deep-learning-based face recognition system. The impact of power plants on generation expansion planning (GEP) over their lifetime is studied in [31]. Deep-learning-based techniques are also commonly employed for time series forecasting.
DL contains powerful methods for extracting knowledge, which allow a large amount of unstructured information to be processed [32]. These techniques are ideal for managing big data and for compute-intensive processes such as image pattern recognition, voice recognition and analysis, etc. DL requires strong computing capabilities and is known to take considerable time in the model training cycle, which has been one of its biggest challenges in the past. Efficient GPUs are widely used to carry out DL tasks with increased requirements for computing capacity. Thus, in the era of big data, DL has become a popular form of data processing and modeling [27]. In conventional ML methodology, the number of layers is limited and the characteristics must be determined beforehand. In DL, the features are estimated automatically, and feature calculation and extraction are not required before such a method is applied. In addition, the progress of DL has introduced a wide range of network structures. The goal of the projects of the authors of [33,34] is to quantify EEG features in order to better understand task-induced neurological impairments caused by stroke and to assess biomarkers that distinguish ischemic stroke patients from healthy adults.
In the two phases of training and prediction, DL models typically offer two major improvements over conventional ML approaches [21]. First, they minimize the need for hand-engineered features, and they can extract features that may not be obvious to a human observer [35]. DL methods also improve accuracy. DL, like traditional ML, can be partitioned into two cases: unsupervised learning (models on unlabeled data) and supervised learning (models on labeled data).

2.1. Supervised Learning

The system model for supervised learning is built from a labeled training set. The backpropagation method is the primary approach used in supervised learning [36].
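As a brief illustration of this idea, the sketch below trains a small fully connected network on synthetic labeled data with backpropagation in PyTorch; the data, layer sizes and hyperparameters are illustrative assumptions only.

# Minimal supervised learning sketch with backpropagation in PyTorch;
# the labeled data are synthetic and the sizes are illustrative.
import torch
from torch import nn

X = torch.randn(256, 8)               # 256 samples with 8 features
y = torch.randint(0, 3, (256,))       # labels for 3 classes

model = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 3))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)       # forward pass on the labeled set
    loss.backward()                   # backpropagation of the error
    optimizer.step()                  # gradient descent update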

2.1.1. Recurrent Neural Networks (RNNs)

The RNN is a discriminative model that mainly processes serial and time-series data. In several tasks, the estimation relies on many previous samples in order to evaluate sequences of inputs, beyond the classification of individual samples. A feed-forward neural network does not apply in such applications because it only maps the current input to an output and has no memory of previous inputs. The input to an RNN contains the present sample as well as the previously observed samples, so the output at step m can be affected by the output at step m-1. Every neuron has a feedback loop that feeds its output to the next step as an input; in other words, each neuron in an RNN has an internal memory that stores the estimates from the previous step. We cannot use original backpropagation here, despite the presence of neuronal cycles, because it works by deriving the loss with respect to the weights of the previous layer, and we do not have a stacked layer model in RNNs. The core of Backpropagation Through Time (BPTT) [37] is a technique called "unrolling the RNN", in which we build a feed-forward network over time. Figure 4 depicts the structure of an RNN and the unrolled concept.
However, due to the vanishing gradient problem and long-term dependencies, RNNs are constrained to looking back only a few steps. New methods have been proposed, such as the GRU (Gated Recurrent Unit) [38] and LSTM (Long Short-Term Memory) [39], which design the hidden state update to decide what to hold from past and present memory. RNNs were developed to tackle sequential problems, such as text, speech and time-series data of different lengths. RNNs can be applied to tasks including detecting drivers' actions in intelligent cars, defining the movement patterns of individual persons and estimating household consumption. Consequently, the RNN is mainly used in the field of natural language processing (NLP) [40,41,42]. Table 1 shows where RNNs are used in IoT fields.
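A minimal sketch of such a recurrent model in PyTorch is shown below; the task (one label per sensor sequence), sequence length and layer sizes are illustrative assumptions, and calling backward() on a loss over the output unrolls the recurrence in time, i.e., BPTT.

# Minimal RNN sketch over a sensor time series in PyTorch; shapes and the
# classification task are illustrative assumptions.
import torch
from torch import nn

class SequenceClassifier(nn.Module):
    def __init__(self, n_features=4, hidden=32, n_classes=2):
        super().__init__()
        self.rnn = nn.RNN(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                # x: (batch, time, features)
        out, _ = self.rnn(x)             # hidden state at every time step
        return self.head(out[:, -1, :])  # classify from the last step

model = SequenceClassifier()
x = torch.randn(16, 50, 4)               # 16 sequences of 50 time steps
logits = model(x)
# Backpropagating a loss over these logits corresponds to BPTT over the
# unrolled sequence.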

2.1.2. Long Short Term Memory (LSTM)

The LSTM is a discriminative method that can work on time-stamped, sequential and long time-dependent data. Figure 5 shows the model of an LSTM. LSTMs are a form of RNN that can learn order dependence in sequence prediction. The LSTM uses a gated unit design, with each gate computing a value between 0 and 1 from its input. Every neuron has four gates to maintain the data: a feedback loop and multiplicative forget, read and write gates. These gates control access to the memory cells and prevent distraction by unrelated inputs. The neuron writes its data into itself when the forget gate is active; otherwise, it forgets its last data when the gate sends a 0. The other interconnected neurons write to it and read its content when the write and read gates are set to 1. By learning which data are to be retained, the stored memory cell computations are not corrupted over time. The common way to reduce the error when training the network is BPTT. LSTM models are stronger than RNN models when the data are characterized by long time dependence [38].
Generally, the LSTM is an extended model of the RNN. Various LSTM methods have been proposed based on the original network [39,56]. LSTMs and standard RNNs have been implemented successfully for sequence prediction and sequence labeling tasks. These models performed better than RNNs on context-sensitive (CS) and context-free (CF) languages [57]. For connected models with small sizes, LSTMs converge rapidly and provide state-of-the-art machine translation and voice recognition efficiently [58]. LSTM networks are not well suited to very large networks on a single multi-core computer. Table 2 shows where LSTMs are used in IoT fields.
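The following sketch shows a small LSTM used as a one-step-ahead forecaster for a univariate sensor series in PyTorch, in the spirit of the time-series applications listed in Table 2; the window length and hidden size are illustrative assumptions.

# Minimal LSTM one-step-ahead forecaster sketch in PyTorch; window length
# and sizes are illustrative assumptions.
import torch
from torch import nn

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, 1)
        out, (h_n, c_n) = self.lstm(x)    # gated cell and hidden states
        return self.head(h_n[-1])         # predict the next value

model = LSTMForecaster()
window = torch.randn(8, 24, 1)            # 8 windows of 24 past readings
next_value = model(window)                # shape (8, 1)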

2.1.3. Convolutional Neural Networks (CNNs)

The CNN is a discriminative method that is mostly used for identifying images and differentiating one from another. A CNN is made up of an input layer, an output layer and some hidden layers. The hidden layers in a CNN architecture include sub-sampling layers, convolutional layers, pooling layers, and fully connected (FC) or non-linear layers. The CNN is the main alternative to FC networks, in which every neuron in one layer is connected to each and every neuron in the next layer, which makes FC networks prone to overfitting the data. Figure 6 depicts the structure of a CNN.
DNNs with dense connections between the layers are difficult to train and do not generalize well for vision-based tasks because they do not exploit translation invariance [77]. This problem can be solved by the CNN, which has this property built in. A CNN takes a 2D input (e.g., an image or a speech spectrogram) and extracts high-level features through a series of hidden layers. The convolution layer is the heart of a CNN; it contains filters of the same shape as the input but of a smaller size. In order to streamline the underlying computation, complex networks can involve global or local pooling layers, which decrease the data dimensions by integrating the outputs of neuron clusters in one layer into a single neuron in the next. Usually, a ReLU layer provides the activation function [78,79], accompanied by additional layers such as further convolutions, pooling layers, FC layers and hidden layers, since the activation function and the final convolution cover their inputs and outputs. Table 3 shows where the CNN is used in IoT fields.
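The sketch below assembles the layer types described above (convolution, ReLU activation, pooling and a fully connected output) into a small PyTorch image classifier; the 32x32 input resolution and ten classes are illustrative assumptions.

# Minimal CNN sketch in PyTorch with the layer types named above; input
# resolution and number of classes are illustrative.
import torch
from torch import nn

cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolution layer
    nn.ReLU(),                                   # non-linear activation
    nn.MaxPool2d(2),                             # local pooling
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                   # fully connected layer
)

images = torch.randn(4, 3, 32, 32)  # a batch of 4 RGB images
logits = cnn(images)                # shape (4, 10)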

2.1.4. Transformer-Based Deep Neural Networks

In the deep learning context, the transformer denotes a sequence-to-sequence neural network architecture that relies on the self-attention mechanism to capture global dependencies [112]. It has attracted the attention of many researchers in the field of natural language processing (NLP) because the transformer is designed to take sequence data as input. One of the most successful transformer-based models, which achieved state-of-the-art performance in many NLP tasks, is Bidirectional Encoder Representations from Transformers (BERT) [113]. Recently, the transformer has also become progressively popular in the computer vision community. Image classification with a transformer that takes patches of images as input was proposed by Dosovitskiy et al. [114]. Another successful project built an end-to-end object detection framework based on the transformer, named the detection transformer (DETR) [115]. DETR simplifies the object detection pipeline by dropping multiple hand-designed components that encode prior knowledge, such as spatial anchors and non-maximal suppression. Thus, the transformer-based deep neural network is also a promising mechanism for artificial intelligence tasks such as NLP and computer-vision-related areas.
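As a simple illustration of self-attention over a sequence, the sketch below uses PyTorch's built-in transformer encoder for sequence classification; the embedding size, sequence length and mean-pooling head are illustrative assumptions, and this is not the BERT or DETR architecture itself.

# Minimal transformer-encoder sketch in PyTorch; sizes and the pooling
# head are illustrative assumptions.
import torch
from torch import nn

d_model = 64
encoder_layer = nn.TransformerEncoderLayer(
    d_model=d_model, nhead=4, dim_feedforward=128, batch_first=True
)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
classifier = nn.Linear(d_model, 2)

tokens = torch.randn(8, 20, d_model)      # 8 sequences of 20 embedded tokens
encoded = encoder(tokens)                 # self-attention captures global context
logits = classifier(encoded.mean(dim=1))  # pool over the sequence, then classify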

2.2. Unsupervised Learning

Unsupervised learning must be used as a complement to traditional learning methods to deal with massive unlabeled data. Training can be performed using stacked restricted Boltzmann machines (RBMs) or stacked auto-encoders to initialize the network, followed by backpropagation and global fine-tuning.

2.2.1. Autoencoder (AE)

The AE is a generative method suitable for extracting features and reducing dimensionality, with the same number of input and output units. The input and output layers are connected through one or more hidden layers. An auto-encoder is a neural network configured to copy its input to its output [116]. The hidden code layer provides a compressed representation of the input. The network is made up of two major parts: an encoder, which maps the input to the code, and a decoder, which maps the code back to a reconstruction of the original input. The auto-encoder is trained by reducing the input–output error. Owing to their ability to recreate the input at the output layer, AEs are used mainly for diagnostics and fault identification, and they have many applications in IoT. Sparse auto-encoders [117], denoising auto-encoders [118] and contractive auto-encoders are among the AE variants. Figure 7 shows a brief architecture of an auto-encoder and a concrete example. Table 4 shows where the AE is used in IoT fields.
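The sketch below trains a small fully connected auto-encoder in PyTorch by minimizing the input–output reconstruction error and computes a per-sample reconstruction error that could be thresholded for fault detection; the input width, bottleneck size and training data are illustrative assumptions.

# Minimal auto-encoder sketch in PyTorch; data and sizes are illustrative,
# and a large reconstruction error can be flagged as a possible fault.
import torch
from torch import nn

encoder = nn.Sequential(nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 3))
decoder = nn.Sequential(nn.Linear(3, 8), nn.ReLU(), nn.Linear(8, 20))
autoencoder = nn.Sequential(encoder, decoder)

optimizer = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
x = torch.randn(128, 20)                              # unlabeled sensor vectors

for epoch in range(50):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(autoencoder(x), x)  # input-output error
    loss.backward()
    optimizer.step()

reconstruction_error = (autoencoder(x) - x).pow(2).mean(dim=1)  # per sample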

2.2.2. Restricted Boltzmann Machines (RBMs)

The RBM is a generative method that can work on various kinds of data and is suitable for classification, dimensionality reduction, feature extraction, etc. RBMs [128] are probabilistic graphical models that can be viewed as stochastic neural networks. The RBM is a variant of the Boltzmann machine with the restriction that its neurons must form a bipartite graph: there can be symmetric connections between a pair of nodes from the visible and hidden groups, but there are no connections among the nodes within the same group. In addition, all visible and hidden neurons are connected to a bias unit. RBMs may be stacked to make DNNs; they are also the backbone of DBNs. In particular, DBNs can be built by stacking RBMs and then fine-tuning the resulting deep network with gradient descent and backpropagation. The goal of RBM training is to maximize the product of the probabilities assigned to the visible units. The RBM has a feature similar to AEs: it is used to estimate latent parameters, which are used in turn to reconstruct the input data with a backward pass. Figure 8 shows the structure of an RBM. Table 5 shows where the RBM is used in IoT fields.
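A minimal sketch of RBM-based feature extraction with scikit-learn's BernoulliRBM is given below; the binary synthetic data and layer sizes are illustrative assumptions.

# Minimal RBM feature-extraction sketch with scikit-learn; the binary
# input data are synthetic and the sizes are illustrative.
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(0)
X = (rng.random((500, 64)) > 0.5).astype(np.float64)  # 64 visible units

rbm = BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)
rbm.fit(X)                         # contrastive-divergence style training

hidden = rbm.transform(X)          # hidden-unit activation probabilities
print(hidden.shape)                # (500, 16)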

2.2.3. Deep Belief Networks (DBNs)

The DBN is a generative method that can work on various types of data. DBNs can be seen as a composition of simple, unsupervised networks (e.g., RBMs and AEs), where the hidden layer of each sub-network serves as the visible layer for the next one. Such a network has connections between the layers, but not within a layer. DBNs can be trained greedily, layer by layer. This composition leads to a fast, unsupervised training procedure applied to the "lowest" layers first, where contrastive divergence is applied in turn for each sub-network. The DBN training is performed layer by layer, viewing each layer as an RBM trained on top of the previously trained layer. Hence, the DBN can be a fast and efficient DL method. The first phase is intended to learn data representations from unlabeled data, and the second attempts to achieve an optimal solution through fine-tuning of the DBN with labeled data [135]. The DBN thus combines unsupervised pre-training and supervised methods to create model designs. Figure 9 shows the structure of a DBN model. Table 6 shows where the DBN is used in IoT fields.
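The sketch below approximates the greedy layer-by-layer idea by stacking two scikit-learn BernoulliRBMs and finishing with a supervised classifier; this is only a rough DBN-style illustration with synthetic data, not a full DBN with end-to-end backpropagation fine-tuning.

# Rough DBN-style sketch: two stacked RBMs trained greedily, followed by a
# supervised output stage; data and sizes are synthetic/illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = (rng.random((500, 64)) > 0.5).astype(np.float64)
y = rng.integers(0, 2, size=500)

dbn_like = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=16, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn_like.fit(X, y)      # each RBM is trained on the previous layer's output
print(dbn_like.score(X, y))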

3. IoT Applications and Challenges

Data analysis contributes significantly to IoT, as discussed in the previous section. In this section, we first review IoT data features and applications. Then, we review several issues (challenges) important for the implementation and development of IoT analysis from the point of view of DL.

3.1. Data Features of IoT

As data is the basis for the extraction of knowledge, high-quality information is important. The IoT has many features and is a complex program. The features vary from domain to domain. Here, some features are discussed.
Connectivity allows the Internet of Things to bring ordinary objects together. Such connectivity is important since simple object-level interactions lead to the collective intelligence of the IoT. With this connectivity, the networking of smart devices and applications will create new business opportunities for the IoT. The key operation of the IoT is the gathering of data from the world, accomplished through complex changes across the devices. The state of such devices varies dynamically, such as sleeping and waking up, connection and/or disconnection, and device contexts such as temperature, position and speed; moreover, the number of devices can change with time, place and person.
IoT is nothing without sensors, which can detect or quantify any changes in the environment in order to produce data that report on device status or even interact with the environment. IoT sensors combined with machine learning techniques have taken a major role in health informatics systems, such as discovering heart failure, lung infections, brain activity and many more [142,143]. The sensed data give a rich view of the dynamic world, even though they are essentially input from the physical world. Sensors can be used in many applications, including our daily life activities. For example, an automatic aircraft control system is made up of multiple sensors used for a variety of activities such as speed control, height monitoring, position tracking, door status, obstacle avoidance, fuel level and navigation. A computer analyzes the data from all of these sensors by comparing them to predetermined values. IoT is smart because of its collection of smart computing methods, software and hardware. Despite smart technology's widespread popularity, IoT intelligence is only concerned with the interaction between devices, while traditional input methods and visual user interfaces handle the interaction between user and system. Because endpoints, networks and the data passed over all of this must be secured, developing a security framework is critical.
Several papers have defined the overall characteristics of big data in terms of volume, speed and variety from different aspects [144,145,146]. However, IoT big data can be characterized by the following 6V features:
  • Volume: In IoT, billions of devices generate huge amounts of data.
  • Velocity: The rate at which IoT data are generated and must be accessed, quickly and efficiently, in real time.
  • Variety: IoT data include text, video, audio, sensor data, etc., and may be structured or unstructured.
  • Veracity: Refers to the accuracy, consistency and trustworthiness of the data, which in turn lead to precise analytics.
  • Variability: The data flow rate varies depending on the IoT application, the data-generating components, time and space.
  • Value: The transformation of IoT big data into useful information and insights that offer many advantages to organizations.

3.2. Deep Learning Using IoT Devices

The availability of the latest IoT frameworks and their open-source libraries for continuous monitoring, real-time edge-level processing and encrypted storage of generated data such as text, tabular data, audio and video has led to an enormous growth rate of IoT datasets [147]. Such data are produced by diverse hardware systems working in outdoor and indoor settings, including smart city sensors, smart organization fields, AR/VR practice centers, etc. In order to train on such large-scale, high-quality IoT datasets, collected over a period of time, within a reasonable amount of time, we need a distributed training system that is scalable and efficiently utilizes the hardware resources of millions of IoT devices. Specifically, such a system environment should consider the current network connectivity among these devices and allow them to work together during training to generate the final deep learning (DL) models at very high speeds for real-time problem-solving [148].
The authors in [149] proposed distributed training on multiple IoT devices instead of following the traditional approach that loads such large-scale datasets to train and build a model locally within a data center or GPU cluster. In this method, instead of using a GPU cluster available within a data center, the DL model is trained and built on the hardware of millions of medium-sized IoT devices across the infrastructure. They addressed the convergence of the resulting model and the scalability of the system. The key issues when involving all IoT devices in training are data privacy, time-consuming dataset-loading I/O, the slow exchange of model gradients during training, and high computational load. These are some of the challenges that are yet to be addressed thoroughly in order to train and build a DL model using global infrastructure.
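To illustrate the general idea of training across many devices (not the exact system of [149]), the sketch below performs federated-averaging-style rounds in PyTorch: each simulated device updates a local copy of the model on its private data, and a coordinator averages the parameters; the model, data and number of devices are synthetic assumptions.

# Simplified federated-averaging-style sketch; an illustration of the
# general distributed-training idea, not the system described in [149].
import copy
import torch
from torch import nn

def local_update(model, x, y, lr=0.05, steps=5):
    """Train a device-local copy of the model on its private data."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(local(x), y).backward()
        opt.step()
    return local.state_dict()

def average_states(states):
    """Average the parameters received from the devices."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

global_model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
devices = [(torch.randn(64, 8), torch.randint(0, 2, (64,))) for _ in range(5)]

for communication_round in range(10):
    states = [local_update(global_model, x, y) for x, y in devices]
    global_model.load_state_dict(average_states(states))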

3.3. Applications of IoT

The IoT application is classified according to its basic attributes and characteristics. Some problems should be taken into consideration for the effective operation of IoT data analysis. Some of the IoT applications are shown in Figure 10. The IoT applications may be categorized in the following ways:
  • Smart Home: The smart home is probably the first application that comes to mind for the IoT. According to IoT analytics, more than 70,000 people search for 'smart home' every month, and many big companies are funding IoT startups for smart home projects. Smart home appliances include washing machines, refrigerators, bulbs, fans, televisions and smart doors, which can communicate online with each other and with approved users to provide better monitoring and management of the appliances and also to optimize energy consumption.
  • Smart City: The optimized traffic system mentioned earlier is one of the many aspects that constitute a smart city. This category is specific to cities. Most problems are common to all cities; however, they may sometimes vary from city to city. Global problems are also emerging in numerous cities, including safe drinking water, declining air quality and rising urban density. The IoT applications in city areas include water management, waste management, security, climate monitoring, traffic management, etc. Smart transportation in cities can reduce noise, pollution, accidents, parking problems and street light problems, and improve public transport.
  • Health care: Relevant real-world knowledge is missing from the tools of modern medical science, which mainly rely on residual data, managed environments and medical examination volunteers. Through research, real-time field data and testing, IoT opens the door to a sea of useful data. New technologies using the IoT have been developed in the medical field to improve the health of patients [150,151]. Sensors can monitor a wound's state, blood pressure, heart rate, sugar and oxygen levels, body temperature, etc., without the presence of doctors and medical practitioners. In [152], physiological signals, which are instantaneous and sensitive to the neurological changes caused by the cognitive load imposed by diverse driving conditions, are used to assess the relationship between neurological outcomes and driving environments.
  • Security: IoT can improve security everywhere in the world using smart cameras. Smart security systems can identify criminals or avoid dangerous situations by means of real-time digital image recognition. Security is the biggest challenge in the IoT field.
  • Smart Retail: This is one of the biggest application areas in the IoT field. Solutions for tracking goods while they are on the road, or getting suppliers to exchange inventory information, have been on the market for years, but they are limited. The use of intelligent GPS and RFID technologies makes it easy to track a product between production and the store, greatly reducing costs and time. The applications of IoT in retail include location tracking, inventory management, equipment maintenance, analyzing mall traffic, etc.
  • Agriculture: Many researchers have already worked in this emerging application of IoT [153,154]. Through the growing use of the IoT, connected devices have penetrated everything from health and well-being to home automation, cars and logistics, intelligent cities, security, retail and industrial IoT. Since farming operations are remote and there are many resources that the IoT can monitor, the way farmers operate can be completely changed. Here, the major challenge is to move farmers to smart farming. They can benefit in many ways, such as checking soil quality and weather conditions, managing costs, reducing wastage, managing crops, etc.
  • Wearables: Nowadays, wearables can be seen on almost anyone; they can monitor heart rate, sugar and oxygen levels, blood pressure, temperature, sleep status, walking distance, etc. Wearable technology is an excellent aspect of IoT applications and is undoubtedly one of the first industries to adopt the IoT.
  • Industrial Automation: Industrial IoT networking enables remote access and control, but, more significantly, the extraction, processing, sharing and analysis of data from various data sources. This has tremendous potential for productivity and performance improvement. IIoT solutions are characterized by their low cost and rapid deployment. In order to achieve better results in cost and customer service, IoT applications can also easily re-engineer devices and their packaging through IoT automation. Some applications are product flow monitoring, digitization, quality control, safety and security, package optimization, and logistics and supply chain optimization.

3.4. Challenges

Data sources are the foundation for the success of DL methods. Applying DL to IoT suffers from a lack of big datasets; to make DL more accurate, we need more data. Another difficulty in IoT applications is transforming raw data into a suitable form to be fed into DL models. Many DL methods need preprocessed data to obtain more accurate results. For IoT applications, preprocessing is more complex since the system deals with data from different sources that may have various formats and distributions while exhibiting missing data. How data collection systems are built is also a vital research topic: the number of sensors working and the way the sensors are deployed influence data quality. Even if the model architecture is well built, a data collection module must be designed for the entire IoT system layout, and it should be reliable, cost effective and trustworthy.
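As a small illustration of the preprocessing burden described above, the sketch below aligns two hypothetical sensor sources with different formats onto a common time grid, fills missing readings and standardizes the result with pandas; the column names, timestamps and resampling interval are illustrative assumptions.

# Minimal IoT preprocessing sketch with pandas; the two sources, column
# names and 10-minute grid are hypothetical.
import numpy as np
import pandas as pd

# Source A: regular temperature readings; source B: irregular humidity logs.
a = pd.DataFrame({
    "ts": pd.date_range("2022-01-01", periods=6, freq="10min"),
    "temp_c": [21.0, 21.4, np.nan, 22.1, 22.3, 22.0],
})
b = pd.DataFrame({
    "ts": pd.to_datetime(["2022-01-01 00:05", "2022-01-01 00:27", "2022-01-01 00:55"]),
    "humidity": [40.1, 41.0, 39.5],
})

merged = (
    pd.concat([a.set_index("ts"), b.set_index("ts")])
    .sort_index()
    .resample("10min").mean()       # put both sources on a common time grid
    .interpolate()                  # fill missing readings
)
features = (merged - merged.mean()) / merged.std()   # standardize per column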
Security is the biggest challenge in the IoT field, as we collect data from many sources. In many IoT applications, maintaining data protection and confidentiality is a major concern, as IoT big data is distributed for analysis over the Internet, making it accessible worldwide. In several applications, anonymization can be used, but these methods can be exploited and anonymized data can be re-identified. DL models learn the characteristics of raw data and thus can be affected by any invalid data stream. Hence, DL models must be equipped with methods for finding irregular or invalid data.
For IoT system designers, designing DL models that meet the needs of running DNNs on resource-constrained devices is a great challenge. This challenge is expected to grow as dataset sizes expand daily and new algorithms are included in DL-based IoT solutions. DL also has many limitations. The authors of [155] reported on DNNs' false confidence on images that humans cannot recognize. Another drawback is that DL models concentrate on classification, while many IoT applications have a form of regression at their analysis core. Few researchers have attempted to introduce regression capabilities to DNNs; for example, [156] proposes an ensemble of a DBN and Support Vector Regression (SVR).
Off-road vehicles’ digital monitoring is hampered by their sophisticated and pricey IoT sensor technologies. In remote, off-network locations, the high reliance on cloud/fog computing, network availability and expert knowledge is a handicap. The answer, which has yet to be commercialized, is the use of edge devices, such as smartphones, with computation capability. The researchers in [157] offer a hybridized computational intelligence technique for developing an edge-device-enabled AI system for off-road vehicle health monitoring and diagnosis (HM&D) using very cheap microphones as sensors. The authors of [158] present a taxonomy that clearly demonstrates how an edge computing (EC) method may be utilized to improve and optimize DL; furthermore, their survey addresses potential research avenues that could lead to the development of edge deep learning (EDL) in the future.
Deep learning is a strong tool for processing IoT big data and thus has high hardware requirements. The design of a DL model for an embedded system with limited resources remains a challenge. Network failures and data disclosure may occur while data are collected, transferred to the servers and analyzed. A movement to develop a cloud-based learning framework that includes edge devices and the cloud is emerging. Such a system can use the edge to reduce delay, maximize safety and protection and use smart techniques for data retention [159]. It can also use the cloud to exchange data across edges and to train high-quality computational models [160].

4. Conclusions

In this paper, a review has been presented on the DL and IoT techniques exploited in various domains such as smart home, smart city, smart transport, energy, localization, the health sector, security, agriculture, etc. In recent years, DL and IoT have attracted the attention of researchers and business units, both of which have shown their positive impact on our lives, cities and the earth. Many IoT applications are clearly supported by DL resources. DL models are effective methods for solving large-scale data analysis problems. We addressed the issue of training and building DL models using large-scale datasets, which are being produced at ever increasing rates due to the availability of the latest IoT frameworks and open-source libraries to collect them. The literature suggests that using distributed IoT devices themselves to train a model is better than a centralized cluster-like infrastructure. However, the distributed approach needs to address challenges such as data privacy, time-consuming I/O operations and high computational complexity. We reviewed the latest research on how supervised (RNN, LSTM, transformer-based deep neural networks and CNN) and unsupervised (AE, RBM and DBN) methods can create profound DL models for IoT applications. Deep learning makes attempts to hand-construct specific features unnecessary. In addition, major advances in various domains have been made with IoT and DL, and further development is expected in the next few years. Moreover, how to design a highly accurate and resource-efficient architecture remains a challenge, and exploration of this area has not yet ended.

Author Contributions

K.L., D.S.R., Z.S.A. and A.A.: Application of statistical, mathematical, computational, or other formal techniques to analyze or synthesize study data. R.K.: Management and coordination responsibility for the research activity planning and execution. N.G., M.A.H. and A.A.K.: Preparation, creation and/or presentation of the published work, specifically visualization/data presentation. Z.S.A.: Preparation, creation and/or presentation of the published work. A.A. and Z.S.A.: data visualization of the revised manuscript. M.A.H. and A.A.K.: addition of new review materials. Z.S.A., A.A.K., M.A.H. and A.A.: data curation and English editing. M.A.H. and A.A.: artwork in the revised version. M.A.H., A.A.K., A.A. and Z.S.A.: Writing—review & editing and validation. A.A., Z.S.A., M.A.H. and A.A.K.: acquisition of the financial support for the project leading to this publication. All authors have read and agreed to the published version of the manuscript.

Funding

Ahmed Alhussen would like to thank Deanship of Scientific Research at Majmaah University for supporting this work under Project No. R-2022-145.

Acknowledgments

Ahmed Alhussen would like to acknowledge Deanship of Scientific Research at Majmaah University for supporting this work under Project No. R-2022-145.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Swan, M. Sensor mania! the internet of things, wearable computing, objective metrics, and the quantified self 2.0. J. Sens. Actuator Netw. 2012, 1, 217–253. [Google Scholar] [CrossRef] [Green Version]
  2. Cai, C.; Hu, M.; Cao, D.; Ma, X.; Li, Q.; Liu, J. Self-deployable indoor localization with acoustic-enabled IoT devices exploiting participatory sensing. IEEE Internet Things J. 2019, 6, 5297–5311. [Google Scholar] [CrossRef]
  3. Wang, C.; Lin, H.; Jiang, H. CANS: Towards congestion-adaptive and small stretch emergency navigation with wireless sensor networks. IEEE Trans. Mob. Comput. 2015, 15, 1077–1089. [Google Scholar] [CrossRef]
  4. Hu, M.; Liu, W.; Peng, K.; Ma, X.; Cheng, W.; Liu, J.; Li, B. Joint routing and scheduling for vehicle-assisted multidrone surveillance. IEEE Internet Things J. 2018, 6, 1781–1790. [Google Scholar] [CrossRef]
  5. Hu, M.; Liu, W.; Lu, J.; Fu, R.; Peng, K.; Ma, X.; Liu, J. On the joint design of routing and scheduling for vehicle-assisted multi-UAV inspection. Future Gener. Comput. Syst. 2019, 94, 214–223. [Google Scholar] [CrossRef]
  6. Chen, M.; Herrera, F.; Hwang, K. Cognitive computing: Architecture, technologies and intelligent applications. IEEE Access 2018, 6, 19774–19783. [Google Scholar] [CrossRef]
  7. Huang, T.; Lan, L.; Fang, X.; An, P.; Min, J.; Wang, F. Promises and challenges of big data computing in health sciences. Big Data Res. 2015, 2, 2–11. [Google Scholar] [CrossRef]
  8. RM, S.P.; Bhattacharya, S.; Maddikunta, P.K.R.; Somayaji, S.R.K.; Lakshmanna, K.; Kaluri, R.; Hussien, A.; Gadekallu, T.R. Load balancing of energy cloud using wind driven and firefly algorithms in internet of everything. J. Parallel Distrib. Comput. 2020, 142, 16–26. [Google Scholar]
  9. Castro-Neto, M.; Jeong, Y.S.; Jeong, M.K.; Han, L.D. Online-SVR for short-term traffic flow prediction under typical and atypical traffic conditions. Expert Syst. Appl. 2009, 36, 6164–6173. [Google Scholar] [CrossRef]
  10. Mahfouz, S.; Mourad-Chehade, F.; Honeine, P.; Farah, J.; Snoussi, H. Target tracking using machine learning and Kalman filter in wireless sensor networks. IEEE Sens. J. 2014, 14, 3715–3725. [Google Scholar] [CrossRef] [Green Version]
  11. Wang, F.; Zhu, Y.; Wang, F.; Liu, J.; Ma, X.; Fan, X. Car4Pac: Last mile parcel delivery through intelligent car trip sharing. IEEE Trans. Intell. Transp. Syst. 2019, 21, 4410–4424. [Google Scholar] [CrossRef]
  12. Manyika, J.; Chui, M.; Bughin, J.; Dobbs, R.; Bisson, P.; Marrs, A. Disruptive Technologies: Advances that will Transform Life, Business, and the Global Economy; McKinsey Global Institute: San Francisco, CA, USA, 2013; Volume 180. [Google Scholar]
  13. Vangelista, L.; Zanella, A.; Zorzi, M. Long-range IoT technologies: The dawn of LoRaTM. In Future Access Enablers of Ubiquitous and Intelligent Infrastructures; Springer: Cham, Switzerland, 2015; pp. 51–58. [Google Scholar]
  14. Hashem, I.A.T.; Chang, V.; Anuar, N.B.; Adewole, K.; Yaqoob, I.; Gani, A.; Ahmed, E.; Chiroma, H. The role of big data in smart city. Int. J. Inf. Manag. 2016, 36, 748–758. [Google Scholar] [CrossRef] [Green Version]
  15. Iwendi, C.; Maddikunta, P.K.R.; Gadekallu, T.R.; Lakshmanna, K.; Bashir, A.K.; Piran, M.J. A metaheuristic optimization approach for energy efficiency in the IoT networks. Softw. Pract. Exp. 2020, 51, 2558–2571. [Google Scholar] [CrossRef]
  16. Naik, N. Choice of effective messaging protocols for IoT systems: MQTT, CoAP, AMQP and HTTP. In Proceedings of the 2017 IEEE International Systems Engineering Symposium (ISSE), Vienna, Austria, 11–13 October 2017; pp. 1–7. [Google Scholar]
  17. Satyanarayanan, M. The emergence of edge computing. Computer 2017, 50, 30–39. [Google Scholar] [CrossRef]
  18. Luan, T.H.; Gao, L.; Li, Z.; Xiang, Y.; Wei, G.; Sun, L. Fog computing: Focusing on mobile users at the edge. arXiv 2015, arXiv:1502.01815. [Google Scholar]
  19. Alsheikh, M.A.; Lin, S.; Niyato, D.; Tan, H.P. Machine learning in wireless sensor networks: Algorithms, strategies, and applications. IEEE Commun. Surv. Tuts 2014, 16, 1996–2018. [Google Scholar] [CrossRef] [Green Version]
  20. Miotto, R.; Wang, F.; Wang, S.; Jiang, X.; Dudley, J.T. Deep learning for healthcare: Review, opportunities and challenges. Briefings Bioinform. 2018, 19, 1236–1246. [Google Scholar] [CrossRef]
  21. Mohammadi, M.; Al-Fuqaha, A.; Sorour, S.; Guizani, M. Deep learning for IoT big data and streaming analytics: A survey. IEEE Commun. Surv. Tutorials 2018, 20, 2923–2960. [Google Scholar] [CrossRef] [Green Version]
  22. Wang, J.; Ma, Y.; Zhang, L.; Gao, R.X.; Wu, D. Deep learning for smart manufacturing: Methods and applications. J. Manuf. Syst. 2018, 48, 144–156. [Google Scholar] [CrossRef]
  23. Reddy, G.T.; Reddy, M.P.K.; Lakshmanna, K.; Kaluri, R.; Rajput, D.S.; Srivastava, G.; Baker, T. Analysis of Dimensionality Reduction Techniques on Big Data. IEEE Access 2020, 8, 54776–54788. [Google Scholar] [CrossRef]
  24. Lakshmanna, K.; Khare, N. Mining DNA Sequence Patterns with Constraints Using Hybridization of Firefly and Group Search Optimization. J. Intell. Syst. 2018, 27, 349–362. [Google Scholar] [CrossRef]
  25. Mohammadi, M.; Al-Fuqaha, A. Enabling cognitive smart cities using big data and machine learning: Approaches and challenges. IEEE Commun. Mag. 2018, 56, 94–101. [Google Scholar] [CrossRef] [Green Version]
  26. Chen, M.; Mao, S.; Zhang, Y.; Leung, V.C. Big Data: Related Technologies, Challenges and Future Prospects; Springer: Heidelberg, Germany, 2014. [Google Scholar]
  27. Ma, X.; Yao, T.; Hu, M.; Dong, Y.; Liu, W.; Wang, F.; Liu, J. A Survey on Deep Learning Empowered IoT Applications. IEEE Access 2019, 7, 181721–181732. [Google Scholar] [CrossRef]
  28. Rodrigues, A.P.; Fernandes, R.; Shetty, A.; Lakshmanna, K.; Shafi, R.M. Real-Time Twitter Spam Detection and Sentiment Analysis using Machine Learning and Deep Learning Techniques. Comput. Intell. Neurosci. 2022, 2022. [Google Scholar] [CrossRef]
  29. Wu, M.; Chen, L. Image recognition based on deep learning. In Proceedings of the 2015 Chinese Automation Congress (CAC), Wuhan, China, 27–29 November 2015; pp. 542–546. [Google Scholar]
  30. Khosravy, M.; Nakamura, K.; Hirose, Y.; Nitta, N.; Babaguchi, N. Model Inversion Attack by Integration of Deep Generative Models: Privacy-Sensitive Face Generation from a Face Recognition System. IEEE Trans. Inf. Forensics Secur. 2022. [Google Scholar] [CrossRef]
  31. Dehghani, M.; Taghipour, M.; Sadeghi Gougheri, S.; Nikoofard, A.; Gharehpetian, G.B.; Khosravy, M. A Deep Learning-Based Approach for Generation Expansion Planning Considering Power Plants Lifetime. Energies 2021, 14, 8035. [Google Scholar] [CrossRef]
  32. Zantalis, F.; Koulouras, G.; Karabetsos, S.; Kandris, D. A review of machine learning and IoT in smart transportation. Future Internet 2019, 11, 94. [Google Scholar] [CrossRef] [Green Version]
  33. Hussain, I.; Park, S.J. Quantitative evaluation of task-induced neurological outcome after stroke. Brain Sci. 2021, 11, 900. [Google Scholar] [CrossRef]
  34. Hussain, I.; Hossain, M.A.; Jany, R.; Bari, M.A.; Uddin, M.; Kamal, A.R.M.; Ku, Y.; Kim, J.S. Quantitative Evaluation of EEG-Biomarkers for Prediction of Sleep Stages. Sensors 2022, 22, 3079. [Google Scholar] [CrossRef]
  35. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  36. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536. [Google Scholar] [CrossRef]
  37. Werbos, P.J. Backpropagation through time: What it does and how to do it. Proc. IEEE 1990, 78, 1550–1560. [Google Scholar] [CrossRef] [Green Version]
  38. Chung, J.; Gulcehre, C.; Cho, K.; Bengio, Y. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv 2014, arXiv:1412.3555. [Google Scholar]
  39. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  40. Yu, A.W.; Lee, H.; Le, Q.V. Learning to skim text. arXiv 2017, arXiv:1704.06877. [Google Scholar]
  41. Lai, S.; Xu, L.; Liu, K.; Zhao, J. Recurrent convolutional neural networks for text classification. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, Austin, TX, USA, 25–30 January 2015. [Google Scholar]
  42. Cho, K.; Van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv 2014, arXiv:1406.1078. [Google Scholar]
  43. Song, X.; Kanasugi, H.; Shibasaki, R. DeepTransport: Prediction and Simulation of Human Mobility and Transportation Mode at a Citywide Level. In Proceedings of the IJCAI, New York, NY, USA, 9–15 July 2016; Volume 16, pp. 2618–2624. [Google Scholar]
  44. Liang, V.C.; Ma, R.T.; Ng, W.S.; Wang, L.; Winslett, M.; Wu, H.; Ying, S.; Zhang, Z. Mercury: Metro density prediction with recurrent neural network on streaming CDR data. In Proceedings of the 2016 IEEE 32nd International Conference on Data Engineering (ICDE), Helsinki, Finland, 16–20 May 2016; pp. 1374–1377. [Google Scholar]
  45. HaddadPajouh, H.; Dehghantanha, A.; Khayami, R.; Choo, K.K.R. A deep recurrent neural network based approach for internet of things malware threat hunting. Future Gener. Comput. Syst. 2018, 85, 88–96. [Google Scholar] [CrossRef]
  46. Roy, B.; Cheung, H. A deep learning approach for intrusion detection in internet of things using bi-directional long short-term memory recurrent neural network. In Proceedings of the 2018 28th International Telecommunication Networks and Applications Conference (ITNAC), Sydney, Australia, 21–23 November 2018; pp. 1–6. [Google Scholar]
  47. Mocanu, E.; Nguyen, P.H.; Gibescu, M.; Kling, W.L. Deep learning for estimating building energy consumption. Sustain. Energy Grids Netw. 2016, 6, 91–99. [Google Scholar] [CrossRef]
  48. Wang, K.C.; Zemel, R. Classifying NBA offensive plays using neural networks. In Proceedings of the MIT Sloan Sports Analytics Conference, Toronto, ON, Canada, 11–12 March 2016; Volume 4. [Google Scholar]
  49. Shah, R.; Romijnders, R. Applying deep learning to basketball trajectories. arXiv 2016, arXiv:1608.03793. [Google Scholar]
  50. Yang, T.Y.; Brinton, C.G.; Joe-Wong, C.; Chiang, M. Behavior-based grade prediction for MOOCs via time series neural networks. IEEE J. Sel. Top. Signal Process. 2017, 11, 716–728. [Google Scholar] [CrossRef]
  51. Piech, C.; Bassen, J.; Huang, J.; Ganguli, S.; Sahami, M.; Guibas, L.J.; Sohl-Dickstein, J. Deep knowledge tracing. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; pp. 505–513. [Google Scholar]
  52. Steinberg, L. Changing the game: The rise of sports analytics. Forbes. Retrieved March 2015, 14, 2017. [Google Scholar]
  53. Singh, B.; Marks, T.K.; Jones, M.; Tuzel, O.; Shao, M. A multi-stream bi-directional recurrent neural network for fine-grained action detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1961–1970. [Google Scholar]
  54. Pigou, L.; Van Den Oord, A.; Dieleman, S.; Van Herreweghe, M.; Dambre, J. Beyond temporal pooling: Recurrence and temporal convolutions for gesture recognition in video. Int. J. Comput. Vis. 2018, 126, 430–439. [Google Scholar] [CrossRef] [Green Version]
  55. Neverova, N.; Wolf, C.; Lacey, G.; Fridman, L.; Chandra, D.; Barbello, B.; Taylor, G. Learning human identity from motion patterns. IEEE Access 2016, 4, 1810–1820. [Google Scholar] [CrossRef]
  56. Asghar, M.Z.; Lajis, A.; Alam, M.M.; Rahmat, M.K.; Nasir, H.M.; Ahmad, H.; Al-Rakhami, M.S.; Al-Amri, A.; Albogamy, F.R. A Deep Neural Network Model for the Detection and Classification of Emotions from Textual Content. Complexity 2022, 2022. [Google Scholar] [CrossRef]
  57. Gers, F.A.; Schmidhuber, E. LSTM recurrent networks learn simple context-free and context-sensitive languages. IEEE Trans. Neural Netw. 2001, 12, 1333–1340. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  58. Sak, H.; Senior, A.; Beaufays, F. Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv 2014, arXiv:1402.1128. [Google Scholar]
  59. Lv, Y.; Duan, Y.; Kang, W.; Li, Z.; Wang, F.Y. Traffic flow prediction with big data: A deep learning approach. IEEE Trans. Intell. Transp. Syst. 2014, 16, 865–873. [Google Scholar] [CrossRef]
  60. Xu, H.; Gao, Y.; Yu, F.; Darrell, T. End-to-end learning of driving models from large-scale video datasets. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2174–2182. [Google Scholar]
  61. Ordóñez, F.J.; Roggen, D. Deep convolutional and lstm recurrent neural networks for multimodal wearable activity recognition. Sensors 2016, 16, 115. [Google Scholar] [CrossRef] [Green Version]
  62. Tao, D.; Wen, Y.; Hong, R. Multicolumn bidirectional long short-term memory for mobile devices-based human activity recognition. IEEE Internet Things J. 2016, 3, 1124–1134. [Google Scholar] [CrossRef]
  63. Dataset, O. OPPORTUNITY+ Activity+ Recognition. 2012. Available online: https://archive.ics.uci.edu/ml/datasets (accessed on 19 November 2015).
  64. Lu, W.; Zhang, J.; Zhao, X.; Wang, J.; Dang, J. Multimodal sensory fusion for soccer robot self-localization based on long short-term memory recurrent neural network. J. Ambient Intell. Humaniz. Comput. 2017, 8, 885–893. [Google Scholar] [CrossRef]
  65. Manic, M.; Amarasinghe, K.; Rodriguez-Andina, J.J.; Rieger, C. Intelligent buildings of the future: Cyberaware, deep learning powered, and human interacting. IEEE Ind. Electron. Mag. 2016, 10, 32–49. [Google Scholar] [CrossRef]
  66. Gensler, A.; Henze, J.; Sick, B.; Raabe, N. Deep Learning for solar power forecasting—An approach using AutoEncoder and LSTM Neural Networks. In Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; pp. 002858–002865. [Google Scholar]
  67. Hada-Muranushi, Y.; Muranushi, T.; Asai, A.; Okanohara, D.; Raymond, R.; Watanabe, G.; Nemoto, S.; Shibata, K. A deep-learning approach for operation of an automated realtime flare forecast. arXiv 2016, arXiv:1606.01587. [Google Scholar]
  68. Reddy, T.; RM, S.P.; Parimala, M.; Chowdhary, C.L.; Hakak, S.; Khan, W.Z. A deep neural networks based model for uninterrupted marine environment monitoring. Comput. Commun. 2020, 157, 64–75. [Google Scholar] [CrossRef]
  69. Lipton, Z.C.; Kale, D.C.; Elkan, C.; Wetzel, R. Learning to diagnose with LSTM recurrent neural networks. arXiv 2015, arXiv:1511.03677. [Google Scholar]
  70. Hammerla, N.Y.; Halloran, S.; Plötz, T. Deep, convolutional, and recurrent models for human activity recognition using wearables. arXiv 2016, arXiv:1604.08880. [Google Scholar]
  71. Gao, Y.; Xiang, X.; Xiong, N.; Huang, B.; Lee, H.J.; Alrifai, R.; Jiang, X.; Fang, Z. Human action monitoring for healthcare based on deep learning. IEEE Access 2018, 6, 52277–52285. [Google Scholar] [CrossRef]
  72. Chavarriaga, R.; Sagha, H.; Calatroni, A.; Digumarti, S.T.; Tröster, G.; Millán, J.d.R.; Roggen, D. The Opportunity challenge: A benchmark database for on-body sensor-based activity recognition. Pattern Recognit. Lett. 2013, 34, 2033–2042. [Google Scholar] [CrossRef] [Green Version]
  73. Reddy, G.T.; Reddy, M.P.K.; Lakshmanna, K.; Rajput, D.S.; Kaluri, R.; Srivastava, G. Hybrid genetic algorithm and a fuzzy logic classifier for heart disease diagnosis. Evol. Intell. 2020, 13, 185–196. [Google Scholar] [CrossRef]
  74. Reiss, A.; Stricker, D. Introducing a new benchmarked dataset for activity monitoring. In Proceedings of the 2012 16th International Symposium on Wearable Computers, Newcastle, UK, 18–22 June 2012; pp. 108–109. [Google Scholar]
  75. Bächlin, M.; Roggen, D.; Tröster, G.; Plotnik, M.; Inbar, N.; Maidan, I.; Herman, T.; Brozgol, M.; Shaviv, E.; Giladi, N.; et al. Potentials of Enhanced Context Awareness in Wearable Assistants for Parkinson’s Disease Patients with the Freezing of Gait Syndrome. In Proceedings of the ISWC, Linz, Austria, 4–7 September 2009; pp. 123–130. [Google Scholar]
  76. Ibrahim, M.S.; Muralidharan, S.; Deng, Z.; Vahdat, A.; Mori, G. A hierarchical deep temporal model for group activity recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1971–1980. [Google Scholar]
  77. Asghar, M.Z.; Albogamy, F.R.; Al-Rakhami, M.S.; Asghar, J.; Rahmat, M.K.; Alam, M.M.; Lajis, A.; Nasir, H.M. Facial Mask Detection Using Depthwise Separable Convolutional Neural Network Model During COVID-19 Pandemic. Front. Public Health 2022, 10, 855254. [Google Scholar] [CrossRef]
  78. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems; Curran Associates, Inc.: Red Hook, NY, USA, 2012; pp. 1097–1105. [Google Scholar]
  79. Alghazzawi, D.; Bamasag, O.; Albeshri, A.; Sana, I.; Ullah, H.; Asghar, M.Z. Efficient Prediction of Court Judgments Using an LSTM+ CNN Neural Network Model with an Optimal Feature Set. Mathematics 2022, 10, 683. [Google Scholar] [CrossRef]
  80. Zhu, J.; Pande, A.; Mohapatra, P.; Han, J.J. Using deep learning for energy expenditure estimation with wearable sensors. In Proceedings of the 2015 17th International Conference on E-health Networking, Application & Services (HealthCom), Boston, MA, USA, 14–17 October 2015; pp. 501–506. [Google Scholar]
  81. Hannun, A.Y.; Rajpurkar, P.; Haghpanahi, M.; Tison, G.H.; Bourn, C.; Turakhia, M.P.; Ng, A.Y. Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network. Nat. Med. 2019, 25, 65. [Google Scholar] [CrossRef] [PubMed]
  82. Prasoon, A.; Petersen, K.; Igel, C.; Lauze, F.; Dam, E.; Nielsen, M. Deep feature learning for knee cartilage segmentation using a triplanar convolutional neural network. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2013; pp. 246–253. [Google Scholar]
  83. Liu, C.; Cao, Y.; Luo, Y.; Chen, G.; Vokkarane, V.; Ma, Y. Deepfood: Deep learning-based food image recognition for computer-aided dietary assessment. In International Conference on Smart Homes and Health Telematics; Springer: Cham, Switzerland, 2016; pp. 37–48. [Google Scholar]
  84. Pereira, C.R.; Pereira, D.R.; Papa, J.P.; Rosa, G.H.; Yang, X.S. Convolutional neural networks applied for parkinson’s disease identification. In Machine Learning for Health Informatics; Springer: Cham, Switzerland, 2016; pp. 377–390. [Google Scholar]
  85. Erol, B.A.; Majumdar, A.; Lwowski, J.; Benavidez, P.; Rad, P.; Jamshidi, M. Improved deep neural network object tracking system for applications in home robotics. In Computational Intelligence for Pattern Recognition; Springer: Cham, Switzerland, 2018; pp. 369–395. [Google Scholar]
  86. Levine, S.; Pastor, P.; Krizhevsky, A.; Ibarz, J.; Quillen, D. Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection. Int. J. Robot. Res. 2018, 37, 421–436. [Google Scholar] [CrossRef]
  87. Amato, G.; Carrara, F.; Falchi, F.; Gennaro, C.; Meghini, C.; Vairo, C. Deep learning for decentralized parking lot occupancy detection. Expert Syst. Appl. 2017, 72, 327–334. [Google Scholar] [CrossRef]
  88. Valipour, S.; Siam, M.; Stroulia, E.; Jagersand, M. Parking-stall vacancy indicator system, based on deep convolutional neural networks. In Proceedings of the 2016 IEEE 3rd World Forum on Internet of Things (WF-IoT), Reston, VA, USA, 12–14 December 2016; pp. 655–660. [Google Scholar]
  89. Zhang, J.; Zheng, Y.; Qi, D. Deep spatio-temporal residual networks for citywide crowd flows prediction. In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017. [Google Scholar]
  90. Li, H.; Li, Y.; Porikli, F. Deeptrack: Learning discriminative feature representations online for robust visual tracking. IEEE Trans. Image Process. 2015, 25, 1834–1848. [Google Scholar] [CrossRef] [Green Version]
  91. Wu, B.; Iandola, F.; Jin, P.H.; Keutzer, K. Squeezedet: Unified, small, low power fully convolutional neural networks for real-time object detection for autonomous driving. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA, 21–26 July 2017; pp. 129–137. [Google Scholar]
  92. Bojarski, M.; Del Testa, D.; Dworakowski, D.; Firner, B.; Flepp, B.; Goyal, P.; Jackel, L.D.; Monfort, M.; Muller, U.; Zhang, J.; et al. End to end learning for self-driving cars. arXiv 2016, arXiv:1604.07316. [Google Scholar]
  93. Shin, M.; Paik, W.; Kim, B.; Hwang, S. An IoT platform with monitoring robot applying CNN-based context-aware learning. Sensors 2019, 19, 2525. [Google Scholar] [CrossRef] [Green Version]
  94. Mittal, G.; Yagnik, K.B.; Garg, M.; Krishnan, N.C. Spotgarbage: Smartphone app to detect garbage using deep learning. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Heidelberg, Germany, 12–16 September 2016; pp. 940–945. [Google Scholar]
  95. Liu, C.; Cao, Y.; Luo, Y.; Chen, G.; Vokkarane, V.; Yunsheng, M.; Chen, S.; Hou, P. A new deep learning-based food recognition system for dietary assessment on an edge computing service infrastructure. IEEE Trans. Serv. Comput. 2017, 11, 249–261. [Google Scholar] [CrossRef]
  96. Sladojevic, S.; Arsenovic, M.; Anderla, A.; Culibrk, D.; Stefanovic, D. Deep neural networks based recognition of plant diseases by leaf image classification. Comput. Intell. Neurosci. 2016, 2016. [Google Scholar] [CrossRef] [Green Version]
  97. Cireşan, D.; Meier, U.; Masci, J.; Schmidhuber, J. Multi-column deep neural network for traffic sign classification. Neural Netw. 2012, 32, 333–338. [Google Scholar] [CrossRef] [Green Version]
  98. Lim, K.; Hong, Y.; Choi, Y.; Byun, H. Real-time traffic sign recognition based on a general purpose GPU and deep-learning. PLoS ONE 2017, 12, e0173317. [Google Scholar] [CrossRef] [Green Version]
  99. Wang, J.; Ding, H.; Bidgoli, F.A.; Zhou, B.; Iribarren, C.; Molloi, S.; Baldi, P. Detecting cardiovascular disease from mammograms with deep learning. IEEE Trans. Med. Imaging 2017, 36, 1172–1181. [Google Scholar] [CrossRef] [PubMed]
  100. Liu, W.; Liu, J.; Gu, X.; Liu, K.; Dai, X.; Ma, H. Deep learning based intelligent basketball arena with energy image. In International Conference on Multimedia Modeling; Springer: Cham, Switzerland, 2017; pp. 601–613. [Google Scholar]
  101. Toshev, A.; Szegedy, C. Deeppose: Human pose estimation via deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1653–1660. [Google Scholar]
  102. Kussul, N.; Lavreniuk, M.; Skakun, S.; Shelestov, A. Deep learning classification of land cover and crop types using remote sensing data. IEEE Geosci. Remote Sens. Lett. 2017, 14, 778–782. [Google Scholar] [CrossRef]
  103. Steen, K.A.; Christiansen, P.; Karstoft, H.; Jørgensen, R.N. Using deep learning to challenge safety standard for highly autonomous machines in agriculture. J. Imaging 2016, 2, 6. [Google Scholar] [CrossRef] [Green Version]
  104. Kautz, T.; Groh, B.H.; Hannink, J.; Jensen, U.; Strubberg, H.; Eskofier, B.M. Activity recognition in beach volleyball using a Deep Convolutional Neural Network. Data Min. Knowl. Discov. 2017, 31, 1678–1705. [Google Scholar] [CrossRef]
  105. Bell, S.; Bala, K. Learning visual similarity for product design with convolutional neural networks. ACM Trans. Graph. (TOG) 2015, 34, 1–10. [Google Scholar] [CrossRef]
  106. Xiao, L.; Yichao, X. Exact clothing retrieval approach based on deep neural network. In Proceedings of the 2016 IEEE Information Technology, Networking, Electronic and Automation Control Conference, Chongqing, China, 20–22 May 2016; pp. 396–400. [Google Scholar]
  107. Advani, S.; Zientara, P.; Shukla, N.; Okafor, I.; Irick, K.; Sampson, J.; Datta, S.; Narayanan, V. A multitask grocery assist system for the visually impaired: Smart glasses, gloves, and shopping carts provide auditory and tactile feedback. IEEE Consum. Electron. Mag. 2016, 6, 73–81. [Google Scholar] [CrossRef]
  108. Liu, Z.; Zhang, L.; Liu, Q.; Yin, Y.; Cheng, L.; Zimmermann, R. Fusion of magnetic and visual sensors for indoor localization: Infrastructure-free and more effective. IEEE Trans. Multimed. 2016, 19, 874–888. [Google Scholar] [CrossRef]
  109. Becker, M. Indoor positioning solely based on user’s sight. In International Conference on Information Science and Applications; Springer: Singapore, 2017; pp. 76–83. [Google Scholar]
  110. Njima, W.; Ahriz, I.; Zayani, R.; Terre, M.; Bouallegue, R. Deep CNN for Indoor Localization in IoT-Sensor Systems. Sensors 2019, 19, 3127. [Google Scholar] [CrossRef] [Green Version]
  111. Liu, Y.; Racah, E.; Correa, J.; Khosrowshahi, A.; Lavers, D.; Kunkel, K.; Wehner, M.; Collins, W. Application of deep convolutional neural networks for detecting extreme weather in climate datasets. arXiv 2016, arXiv:1605.01156. [Google Scholar]
  112. Hu, R.; Chen, J.; Zhou, L. A transformer-based deep neural network for arrhythmia detection using continuous ECG signals. Comput. Biol. Med. 2022, 144, 105325. [Google Scholar] [CrossRef]
  113. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805. [Google Scholar]
  114. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
  115. Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; Zagoruyko, S. End-to-end object detection with transformers. In Proceedings of the European Conference on Computer Vision; Springer: Cham, Switzerland, 2020; pp. 213–229. [Google Scholar]
  116. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  117. Lee, H.; Battle, A.; Raina, R.; Ng, A.Y. Efficient sparse coding algorithms. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 2007; pp. 801–808. [Google Scholar]
  118. Vincent, P.; Larochelle, H.; Bengio, Y.; Manzagol, P.A. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, Helsinki, Finland, 5–9 July 2008; pp. 1096–1103. [Google Scholar]
  119. Shao, H.; Jiang, H.; Wang, F.; Zhao, H. An enhancement deep feature fusion method for rotating machinery fault diagnosis. Knowl.-Based Syst. 2017, 119, 200–220. [Google Scholar] [CrossRef]
  120. Lee, H.; Kim, Y.; Kim, C.O. A deep learning model for robust wafer fault monitoring with sensor measurement noise. IEEE Trans. Semicond. Manuf. 2016, 30, 23–31. [Google Scholar] [CrossRef]
  121. Liu, Y.; Wu, L. Geological disaster recognition on optical remote sensing images using deep learning. Procedia Comput. Sci. 2016, 91, 566–575. [Google Scholar] [CrossRef] [Green Version]
  122. Fragkiadaki, K.; Levine, S.; Felsen, P.; Malik, J. Recurrent network models for human dynamics. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 4346–4354. [Google Scholar]
  123. Ionescu, C.; Papava, D.; Olaru, V.; Sminchisescu, C. Human3.6M: Large scale datasets and predictive methods for 3D human sensing in natural environments. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 36, 1325–1339. [Google Scholar] [CrossRef]
  124. Gu, Y.; Chen, Y.; Liu, J.; Jiang, X. Semi-supervised deep extreme learning machine for Wi-Fi based localization. Neurocomputing 2015, 166, 282–293. [Google Scholar] [CrossRef]
  125. Zhang, W.; Liu, K.; Zhang, W.; Zhang, Y.; Gu, J. Deep neural networks for wireless localization in indoor and outdoor environments. Neurocomputing 2016, 194, 279–287. [Google Scholar] [CrossRef]
  126. Shone, N.; Ngoc, T.N.; Phai, V.D.; Shi, Q. A deep learning approach to network intrusion detection. IEEE Trans. Emerg. Top. Comput. Intell. 2018, 2, 41–50. [Google Scholar] [CrossRef] [Green Version]
  127. Lopez-Martin, M.; Carro, B.; Sanchez-Esguevillas, A.; Lloret, J. Conditional variational autoencoder for prediction and feature recovery applied to intrusion detection in iot. Sensors 2017, 17, 1967. [Google Scholar] [CrossRef] [Green Version]
  128. Fischer, A.; Igel, C. An introduction to restricted Boltzmann machines. In Iberoamerican Congress on Pattern Recognition; Springer: Berlin/Heidelberg, Germany, 2012; pp. 14–36. [Google Scholar]
  129. Mocanu, D.C.; Mocanu, E.; Nguyen, P.H.; Gibescu, M.; Liotta, A. Big IoT data mining for real-time energy disaggregation in buildings. In Proceedings of the 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Budapest, Hungary, 9–12 October 2016; pp. 003765–003769. [Google Scholar]
  130. Kolter, J.Z.; Johnson, M.J. REDD: A public data set for energy disaggregation research. In Proceedings of the Workshop on Data Mining Applications in Sustainability (SIGKDD), San Diego, CA, USA, 21 August 2011; Volume 25, pp. 59–62. [Google Scholar]
  131. Wang, X.; Gao, L.; Mao, S.; Pandey, S. DeepFi: Deep learning for indoor fingerprinting using channel state information. In Proceedings of the 2015 IEEE Wireless Communications and Networking Conference (WCNC), New Orleans, LA, USA, 9–12 March 2015; pp. 1666–1671. [Google Scholar]
  132. Wang, X.; Gao, L.; Mao, S.; Pandey, S. CSI-based fingerprinting for indoor localization: A deep learning approach. IEEE Trans. Veh. Technol. 2016, 66, 763–776. [Google Scholar] [CrossRef] [Green Version]
  133. Wang, J.; Zhang, X.; Gao, Q.; Yue, H.; Wang, H. Device-free wireless localization and activity recognition: A deep learning approach. IEEE Trans. Veh. Technol. 2016, 66, 6258–6267. [Google Scholar] [CrossRef]
  134. Ma, X.; Yu, H.; Wang, Y.; Wang, Y. Large-scale transportation network congestion evolution prediction using deep learning theory. PLoS ONE 2015, 10, e0119044. [Google Scholar] [CrossRef] [PubMed]
  135. Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef] [Green Version]
  136. Huang, W.; Song, G.; Hong, H.; Xie, K. Deep architecture for traffic flow prediction: Deep belief networks with multitask learning. IEEE Trans. Intell. Transp. Syst. 2014, 15, 2191–2201. [Google Scholar] [CrossRef]
  137. Chang, C.Y.; Bhattacharya, S.; Raj Vincent, P.; Lakshmanna, K.; Srinivasan, K. An Efficient Classification of Neonates Cry Using Extreme Gradient Boosting-Assisted Grouped-Support-Vector Network. J. Healthc. Eng. 2021, 2021. [Google Scholar] [CrossRef]
  138. Kang, M.J.; Kang, J.W. Intrusion detection system using deep neural network for in-vehicle network security. PLoS ONE 2016, 11, e0155781. [Google Scholar] [CrossRef]
  139. Kahou, S.E.; Bouthillier, X.; Lamblin, P.; Gulcehre, C.; Michalski, V.; Konda, K.; Jean, S.; Froumenty, P.; Dauphin, Y.; Boulanger-Lewandowski, N.; et al. Emonets: Multimodal deep learning approaches for emotion recognition in video. J. Multimodal User Interfaces 2016, 10, 99–111. [Google Scholar] [CrossRef] [Green Version]
  140. He, Y.; Mendis, G.J.; Wei, J. Real-time detection of false data injection attacks in smart grid: A deep learning-based intelligent mechanism. IEEE Trans. Smart Grid 2017, 8, 2505–2516. [Google Scholar] [CrossRef]
  141. Yuan, Z.; Lu, Y.; Wang, Z.; Xue, Y. Droid-Sec: Deep learning in Android malware detection. In Proceedings of the 2014 ACM Conference on SIGCOMM, Chicago, IL, USA, 17–22 August 2014; pp. 371–372. [Google Scholar]
  142. Hussain, I.; Park, S.J. Big-Ecg: Cardiographic Predictive Cyber-Physical System for Stroke Management. IEEE Access 2021, 9, 123146–123164. [Google Scholar] [CrossRef]
  143. Hussain, I.; Park, S.J. HealthSOS: Real-Time Health Monitoring System for Stroke Prognostics. IEEE Access 2020, 8, 213574–213586. [Google Scholar] [CrossRef]
  144. Hilbert, M. Big data for development: A review of promises and challenges. Dev. Policy Rev. 2016, 34, 135–174. [Google Scholar] [CrossRef] [Green Version]
  145. Fan, W.; Bifet, A. Mining big data: Current status, and forecast to the future. ACM SIGKDD Explor. Newsl. 2013, 14, 1–5. [Google Scholar] [CrossRef]
  146. Hu, H.; Wen, Y.; Chua, T.S.; Li, X. Toward scalable systems for big data analytics: A technology tutorial. IEEE Access 2014, 2, 652–687. [Google Scholar] [CrossRef]
  147. Mahdavinejad, M.S.; Rezvan, M.; Barekatain, M.; Adibi, P.; Barnaghi, P.; Sheth, A.P. Machine learning for Internet of Things data analysis: A survey. Digit. Commun. Netw. 2018, 4, 161–175. [Google Scholar] [CrossRef]
  148. Saleem, T.J.; Chishti, M.A. Deep learning for Internet of Things data analytics. Procedia Comput. Sci. 2019, 163, 381–390. [Google Scholar] [CrossRef]
  149. Sudharsan, B.; Patel, P.; Breslin, J.; Ali, M.I.; Mitra, K.; Dustdar, S.; Rana, O.; Jayaraman, P.P.; Ranjan, R. Toward distributed, global, deep learning using IoT devices. IEEE Internet Comput. 2021, 25, 6–12. [Google Scholar] [CrossRef]
  150. Lakshmanna, K.; Khare, N. Constraint-based measures for DNA sequence mining using group search optimization algorithm. Int. J. Intell. Eng. Syst. 2016, 9, 91–100. [Google Scholar] [CrossRef]
  151. Lakshmanna, K.; Khare, N. FDSMO: Frequent DNA sequence mining using FBSB and optimization. Int. J. Intell. Eng. Syst. 2016, 9, 157–166. [Google Scholar] [CrossRef]
  152. Hussain, I.; Young, S.; Park, S.J. Driving-induced neurological biomarkers in an advanced driver-assistance system. Sensors 2021, 21, 6985. [Google Scholar] [CrossRef]
  153. Gupta, N.; Khosravy, M.; Patel, N.; Dey, N.; Gupta, S.; Darbari, H.; Crespo, R.G. Economic data analytic AI technique on IoT edge devices for health monitoring of agriculture machines. Appl. Intell. 2020, 50, 3990–4016. [Google Scholar] [CrossRef]
  154. Garg, D.; Khan, S.; Alam, M. Integrative use of IoT and deep learning for agricultural applications. In Proceedings of ICETIT 2019; Springer: Cham, Switzerland, 2020; pp. 521–531. [Google Scholar]
  155. Nguyen, A.; Yosinski, J.; Clune, J. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 427–436. [Google Scholar]
  156. Qiu, X.; Zhang, L.; Ren, Y.; Suganthan, P.N.; Amaratunga, G. Ensemble deep learning for regression and time series forecasting. In Proceedings of the 2014 IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL), Orlando, FL, USA, 9–12 December 2014; pp. 1–6. [Google Scholar]
  157. Gupta, N.; Khosravy, M.; Patel, N.; Dey, N.; Crespo, R.G. Lightweight Computational Intelligence for IoT Health Monitoring of Off-Road Vehicles: Enhanced Selection Log-Scaled Mutation GA Structured ANN. IEEE Trans. Ind. Informatics 2021, 18, 611–619. [Google Scholar] [CrossRef]
  158. Zhan, Z.H.; Li, J.Y.; Zhang, J. Evolutionary deep learning: A survey. Neurocomputing 2022, 483, 42–58. [Google Scholar] [CrossRef]
  159. Zhao, P.; Li, J.; Zeng, F.; Xiao, F.; Wang, C.; Jiang, H. ILLIA: Enabling k-Anonymity-Based Privacy Preserving Against Location Injection Attacks in Continuous LBS Queries. IEEE Internet Things J. 2018, 5, 1033–1042. [Google Scholar] [CrossRef]
  160. Stoica, I.; Song, D.; Popa, R.A.; Patterson, D.; Mahoney, M.W.; Katz, R.; Joseph, A.D.; Jordan, M.; Hellerstein, J.M.; Gonzalez, J.E.; et al. A Berkeley view of systems challenges for AI. arXiv 2017, arXiv:1712.05855. [Google Scholar]
Figure 1. Data generation and processing of IoT.
Figure 2. Google Trends shows increasing interest in DL in recent years.
Figure 3. Google Trends shows increasing interest in CNN in recent years.
Figure 4. The Structure of the Recurrent Neural Network Model.
Figure 5. The Structure of the Long Short-Term Memory Model.
Figure 6. The Structure of the Convolutional Neural Network Model.
Figure 7. Architecture of the Auto-Encoder Model.
Figure 8. The Structure of the Restricted Boltzmann Machine Model.
Figure 9. The Structure of the Deep Belief Networks Model.
Figure 10. IoT Applications.
Table 1. RNN model in IoT fields.

Applications in IoT | Reference | Dataset Used
Prediction of transport or group density | [43,44] | Data of the telecommunication department/CDR
Smart city | [44,45,46] | Data of the telecommunication department/CDR, climate data, IDS data
Energy | [47] | Electric power consumption, http://archive.ics.uci.edu/ml (accessed on 5 May 2022)
Recognising images | [48,49] | SportVU dataset
Education | [50,51] | MOOC dataset
Sport and retail | [52,53] | Sports data, MPII Cooking dataset, MERL Shopping Dataset
Detection in physiology and psychology | [54,55] | Montalbano gesture recognition dataset, Google Abacus dataset
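To make the entries in Table 1 concrete, the following is a minimal, illustrative sketch of how an RNN can be applied to windowed IoT sensor streams. It is not taken from any of the surveyed works: the data are synthetic placeholders, and the window length, layer sizes, class count and training settings are assumptions chosen only for demonstration (TensorFlow 2.x/Keras is assumed).

```python
# Minimal RNN sketch for windowed IoT sensor classification (illustrative only).
# The random arrays below stand in for CDR/sensor windows and activity labels.
import numpy as np
import tensorflow as tf

n_samples, timesteps, n_features, n_classes = 500, 20, 3, 4
X = np.random.rand(n_samples, timesteps, n_features).astype("float32")
y = np.random.randint(0, n_classes, size=n_samples)

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(32, input_shape=(timesteps, n_features)),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0).round(3))  # class probabilities for one window
```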
Table 2. LSTM model in IoT fields.

Applications in IoT | Reference | Dataset Used
Prediction of transport or group density | [44] | Data of the telecommunication department/CDR
Short-term traffic prediction | [59] | Caltrans Performance Measurement System (PeMS) database
Autonomous driving | [60] | Large-scale video dataset
Detection in physiology and psychology | [61,62] | OPPORTUNITY dataset [63], Skoda dataset, NMHA dataset
Localization | [64] | Location and environment data
Smart home and city | [43,65] | Electrical consumption data, GPS data in Japan
Energy | [66,67,68] | GermanSolarFarm dataset, forecast dataset, beach dataset
Healthcare | [69,70,71] | Opportunity dataset [72,73], PAMAP2 dataset [74], Daphnet Gait dataset (DG) [75], clinical diagnosis data
Education | [51] | MOOC dataset
Sport | [49,76] | NBA SportVU data, Collective Activity Dataset, volleyball dataset
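Several Table 2 entries (e.g., short-term traffic and energy prediction) cast the problem as next-step forecasting over a sliding window. The sketch below illustrates that setup on a synthetic signal; the window length, network size and training settings are assumptions for demonstration only and do not reproduce any cited system (TensorFlow 2.x/Keras assumed).

```python
# Minimal LSTM sketch for next-step prediction on a univariate IoT series.
# A sine wave stands in for traffic-flow or energy-consumption readings.
import numpy as np
import tensorflow as tf

series = np.sin(np.linspace(0, 40, 1000)).astype("float32")
window = 24
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]                      # shape: (samples, timesteps, 1)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("next-step forecast:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```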
Table 3. CNN model in IoT fields.

Applications in IoT | Reference | Dataset Used
Healthcare | [70,71,80,81,82,83,84] | Opportunity dataset [72], PAMAP2 dataset [74], Daphnet Gait dataset (DG) [75], human action recognition dataset, cardiology dataset, knee cartilage dataset, food image dataset, Parkinson's disease data
Smart home and city | [65,85,86,87,88] | Home robotics data, Brain Robot Data, electric consumption data, CNRPark-EXT dataset, PKLot dataset
Transportation | [89,90,91,92,93] | Traffic data, KITTI object detection dataset, driving dataset
Recognizing images | [76,84,87,88,94,95,96,97,98,99,100] | PKLot dataset, CNRPark-EXT dataset, Garbage In Images (GINI) dataset, UEC-256/UEC-100 dataset, leaf image dataset, German traffic data, LISA US traffic sign dataset, Parkinson's disease dataset, full-field digital mammograms (FFDMs)
Detection of physiology and psychology | [54,61,101] | Frames Labeled In Cinema, Leeds Sports Pose dataset, OPPORTUNITY, Skoda and Actitracker datasets, gesture data
Agriculture | [96,102,103] | Leaf image data, U.S. Geological Survey (USGS) data, agriculture data
Sport and retail | [76,100,104,105,106,107] | Basketball data, volleyball data, group activity data, real-world internet data, clothing image dataset, INRIA dataset
Localization | [108,109,110] | Fingerprint data, GPS data
Government | [111] | Climate dataset
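Most CNN applications in Table 3 (parking occupancy, leaf disease, traffic signs, food images) reduce to small-image classification. As an illustrative sketch only, the snippet below builds a compact CNN on synthetic 32 × 32 RGB images; the architecture, data and training settings are placeholder assumptions, not the configurations used in the cited studies (TensorFlow 2.x/Keras assumed).

```python
# Minimal CNN sketch for small-image IoT classification (e.g., parking stall,
# leaf disease, traffic sign). Random images stand in for real camera data.
import numpy as np
import tensorflow as tf

n_samples, n_classes = 200, 3
X = np.random.rand(n_samples, 32, 32, 3).astype("float32")
y = np.random.randint(0, n_classes, size=n_samples)

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
```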
Table 4. AE model in IoT fields.

Applications in IoT | Reference | Dataset Used
Fault assessment | [119,120] | Diagnosis dataset, multivariate signal datasets
Image recognition | [121] | Optical remote sensing images from Google Earth
Detection in physiology and psychology | [122] | H3.6M dataset [123]
Energy | [66] | GermanSolarFarm dataset
Localization | [124,125] | HTC Sensation data, fingerprint dataset
Public sector | [121] | Optical remote sensing images from Google Earth
IoT infrastructure | [126,127] | IDS dataset
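For the fault-assessment and IDS-style entries in Table 4, auto-encoders are commonly trained on normal data only, and anomalies are flagged by a high reconstruction error. The following is a minimal sketch of that idea under assumed, synthetic data and an assumed thresholding rule; it is not the method of any specific reference (TensorFlow 2.x/Keras assumed).

```python
# Minimal auto-encoder sketch for reconstruction-error based fault/intrusion
# screening on IoT feature vectors; data, sizes and threshold are illustrative.
import numpy as np
import tensorflow as tf

n_features = 20
X_normal = np.random.normal(0.0, 1.0, size=(1000, n_features)).astype("float32")

inputs = tf.keras.Input(shape=(n_features,))
encoded = tf.keras.layers.Dense(8, activation="relu")(inputs)        # bottleneck
decoded = tf.keras.layers.Dense(n_features, activation="linear")(encoded)
autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_normal, X_normal, epochs=10, batch_size=32, verbose=0)

# Flag future samples whose reconstruction error is far above the training range.
recon = autoencoder.predict(X_normal, verbose=0)
errors = np.mean((X_normal - recon) ** 2, axis=1)
threshold = errors.mean() + 3 * errors.std()
print("anomaly threshold:", round(float(threshold), 4))
```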
Table 5. RBM model in IoT fields.

Applications in IoT | Reference | Dataset Used
Energy | [47,129] | Reference Energy Disaggregation Dataset (REDD) [130], energy consumption data
Localization | [131,132,133] | Fingerprint dataset, received signal strength (RSS) data
Health sector | [69] | International Classification of Diseases (ICD-9) codes
Intelligent transportation system | [134] | Traffic dataset
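The RBM applications in Table 5 rely on unsupervised feature learning, typically trained with contrastive divergence. As a didactic sketch (not drawn from the cited works), the NumPy code below implements CD-1 training for a small binary RBM on synthetic data; the layer sizes, learning rate and number of epochs are illustrative assumptions.

```python
# Minimal RBM sketch trained with one-step contrastive divergence (CD-1).
# Binary random vectors stand in for discretized IoT features (e.g., RSS bins).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible bias
        self.b_h = np.zeros(n_hidden)    # hidden bias
        self.lr = lr

    def sample_h(self, v):
        p_h = sigmoid(v @ self.W + self.b_h)
        return p_h, (rng.random(p_h.shape) < p_h).astype(float)

    def sample_v(self, h):
        p_v = sigmoid(h @ self.W.T + self.b_v)
        return p_v, (rng.random(p_v.shape) < p_v).astype(float)

    def cd1_step(self, v0):
        p_h0, h0 = self.sample_h(v0)       # positive phase
        p_v1, _ = self.sample_v(h0)        # one Gibbs step (negative phase)
        p_h1, _ = self.sample_h(p_v1)
        self.W   += self.lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
        self.b_v += self.lr * (v0 - p_v1).mean(axis=0)
        self.b_h += self.lr * (p_h0 - p_h1).mean(axis=0)
        return np.mean((v0 - p_v1) ** 2)   # reconstruction error

X = (rng.random((200, 16)) < 0.3).astype(float)   # 200 synthetic binary samples
rbm = RBM(n_visible=16, n_hidden=8)
for epoch in range(20):
    err = rbm.cd1_step(X)
print("final reconstruction error:", round(float(err), 4))
```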
Table 6. DBN model in IoT fields.

Applications in IoT | Reference | Dataset Used
Transport | [120,136] | PeMS dataset, multivariate time series dataset
Energy | [66] | GermanSolarFarm dataset
Health sector | [69,137] | International Classification of Diseases (ICD-9) codes
Intelligent transportation system | [138] | IDS dataset
Image recognition | [120] | Multivariate time series dataset
Detection of physiology and psychology | [139] | AFEW4 dataset
Security | [140,141] | Security dataset, malicious dataset
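A DBN is usually built by greedy, layer-wise training of stacked RBMs followed by a supervised readout or fine-tuning stage. The sketch below only approximates this idea with scikit-learn's BernoulliRBM stages and a logistic-regression classifier on synthetic binary features; it omits generative fine-tuning, and all sizes and hyperparameters are assumptions for illustration, not those of the works in Table 6.

```python
# Minimal DBN-style sketch: greedy layer-wise training of stacked RBMs with a
# logistic-regression readout (no generative fine-tuning); data is synthetic.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = (rng.random((300, 64)) < 0.25).astype(float)   # binary stand-in features
y = rng.integers(0, 2, size=300)                   # stand-in labels

dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn.fit(X, y)                  # RBM layers are trained greedily, one after the other
print("training accuracy:", round(dbn.score(X, y), 3))
```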