Review

A Review of Deep Learning Applications for the Next Generation of Cognitive Networks

by Raymundo Buenrostro-Mariscal 1, Pedro C. Santana-Mancilla 1,2, Osval Antonio Montesinos-López 1, Juan Ivan Nieto Hipólito 3 and Luis E. Anido-Rifón 2,*

1 School of Telematics, Universidad de Colima, Colima 28040, Mexico
2 atlanTTic Research Center, School of Telecommunications Engineering, University of Vigo, 36310 Vigo, Spain
3 Facultad de Ingeniería, Arquitectura y Diseño, Universidad Autónoma de Baja California, Ensenada 22860, Mexico
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(12), 6262; https://doi.org/10.3390/app12126262
Submission received: 18 May 2022 / Revised: 8 June 2022 / Accepted: 16 June 2022 / Published: 20 June 2022

Abstract

Intelligence capabilities will be the cornerstone in the development of next-generation cognitive networks. These capabilities allow a network to observe its operating conditions, learn from them, and then, using the knowledge gained, respond to its environment to optimize network performance. This study aims to offer an overview of the current state of the art on the use of deep learning in applications for intelligent cognitive networks that can serve as a reference for future initiatives in this field. To this end, a systematic literature review was carried out in three databases, and eligible articles were selected that focused on using deep learning to solve challenges presented by current cognitive networks. As a result, 14 articles were analyzed. The results showed that applying algorithms based on deep learning to optimize cognitive data networks has been approached from different perspectives in recent years, largely in an experimental way to test its technological feasibility. In addition, its implications for solving fundamental challenges in current wireless networks are discussed.

1. Introduction

Nowadays, with the arrival of smart environments as part of many people's daily lives, it has become evident that their design must consider the underlying telecommunications networks. These networks allow the transmission of data between the various environment components (Figure 1): Internet of Things (IoT) devices, middleware, and applications. However, a data network that lacks the intelligence to adapt dynamically to the conditions of complex environments will deliver less-than-optimal communication and limit functions that need to run in real time. Because of this, there is particular interest in studying novel data networks.
The arrival of new generations of mobile networks, such as 5G, and the exponential growth of end-users demanding large data transactions have made current networks more complex, creating the need for faster and more intelligent learning mechanisms [1]. However, most current communications networks are constrained by the layered protocol architecture, which leaves individual elements unaware of the network state experienced by other elements. Consequently, the response to the conditions presented in the network has a limited and isolated scope, often resulting in sub-optimal performance [2]. This forces us to rethink the design of next-generation networks and transform them into cognitive networks that satisfy these data communication needs. A cognitive network is an intelligent network that should be simple to manage and whose capabilities should be continuously developed and expanded with as little human interaction as feasible [3]. Recently, much effort has been devoted to improving network connectivity by developing reactive mechanisms that address specific operational problems. However, these mechanisms work inefficiently when the network undergoes significant changes in its operation, since they cannot collect data to continue learning and adapt to the new conditions, much less predict future changes. This motivates the use of deep learning (DL) to enable protocols to observe network conditions and use prior knowledge to respond efficiently to this complex and dynamic operation [4].
For this reason, part of this paper provides a concise discussion of the fundamentals and predictive ability of DL methods and the many applications available for the next generation of cognitive networks. It is important to point out that DL is a type of machine learning (ML) algorithm, and ML is a subfield of artificial intelligence (AI) that allows the development of smart devices, products, and systems that mimic many human behaviors and capabilities. ML has become the primary tool for developing AI because it provides algorithms that can learn from experience, which is powerful for generalizing (deducing new facts from known facts) because ML algorithms assume that the past predicts the future. The main difference between machine learning methods and conventional statistical learning methods is that most ML methods are nonparametric models. For this reason, AI products developed with ML have nowadays surpassed those developed with symbolic AI (old-fashioned AI), which was the dominant paradigm before the arrival of ML. In symbolic AI, human knowledge and behavioral rules are explicitly encoded in computer programs; the foundation of symbolic AI programs is the creation of explicit structures and behavior rules. Symbolic AI is the best option when the rules are explicit, because the input can simply be received and translated into symbols. However, the breaking point of symbolic AI was its inability to learn from the past to predict the future.
For this reason, after the 1980s, ML was adopted for AI because it gives computers the ability to learn without being explicitly programmed [5] and enables them to act and make data-driven decisions to carry out specific tasks. In essence, the ML domain is a combination of probability, statistics, and computer science that allows the development of stochastic algorithms designed to learn and improve over time when exposed to new data. ML methods are thus defined as the application of statistical methods to identify patterns in data using computers [6] and as methods that can learn from data and detect hidden patterns in databases, using the generated knowledge to predict new outputs of the system [7].
The fundamentals of ML methods are varied, but our focus in this survey is on DL methods, a particular type of ML method. DL models differ from most ML methods in that their functioning is inspired by the human brain. The power of the human brain resides in the fact that it is composed of around 10^11 neurons. Neurons work in parallel, with memory processing the information captured by the synapses and distributing it over the network [8]. For this reason, an artificial neural network is described as a collection of simple pieces (usually adaptable) that are massively interconnected in parallel and organized hierarchically to interact with real-world objects in the same way the human nervous system does [9]. DL is defined as a generalization of artificial neural networks (ANN) in which more than one hidden layer is used, implying that more neurons are needed to implement the model. The adjective "deep" applies not to the acquired knowledge itself but to how the knowledge is acquired [10], since it stands for the idea of successive layers of representations; the "depth" of a model refers to the number of layers that contribute to it. This means that DL is a type of ML technique that utilizes a stack of multiple processing layers, where each successive layer uses the output of the previous layer as input to learn representations of data at various levels of abstraction. DL is a type of universal learning that may be used to solve supervised, semi-supervised, and unsupervised problems. The supervised framework requires training data containing pairs of objects (typically vectors): one component is the input data (predictors), and the other is the desired outcome (response variable = output). Supervised ML approaches learn a function that translates an input to an output based on these input–output pairings. The function's output can be a continuous value (as in regression problems), a class label (as in binary and multinomial regression), or a count value (as in Poisson regression). This means that ML methods allow the creation of machines for predicting many types of univariate and multivariate outcomes. Unsupervised DL algorithms, on the other hand, only have input (predictor) data (X) and no labeled outputs or response variables (y). Their goal is therefore to extract the underlying structure or distribution of the data to understand it better; however, since there is no supervision, there is no correct answer against which to verify the accuracy of the result. Unlike supervised and unsupervised methods, semi-supervised methods have a few observations with both their inputs (X) and their output label (Y), while most observations lack output labels. These methods thus try to work with less labeled data for training and, therefore, less processing time, seeking to alleviate two of the main problems of supervised methods while increasing the low efficiency of unsupervised methods. Although these three approaches have been used in many domains, we will focus primarily on applications of supervised methods, which are mainly used for prediction purposes.
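To make the idea of successive layers of representations and supervised input–output learning concrete, the following minimal sketch (in Python with PyTorch; the layer sizes, data, and task are illustrative assumptions, not taken from any reviewed work) builds a small supervised DL model in which each hidden layer consumes the representation produced by the previous one:

```python
import torch
import torch.nn as nn

# A "deep" model: each hidden layer learns a new representation
# of the output of the layer before it.
model = nn.Sequential(
    nn.Linear(16, 64),  # input: 16 illustrative predictors (X)
    nn.ReLU(),
    nn.Linear(64, 32),  # second level of representation
    nn.ReLU(),
    nn.Linear(32, 1),   # output: one continuous response (y), i.e., regression
)

loss_fn = nn.MSELoss()  # regression loss for a continuous output
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

# Supervised learning: training data are input-output pairs (X, y).
X = torch.randn(128, 16)  # synthetic inputs, for illustration only
y = torch.randn(128, 1)   # synthetic continuous responses

for _ in range(100):
    optim.zero_grad()
    loss = loss_fn(model(X), y)  # compare predictions to labels
    loss.backward()              # backpropagate through all layers
    optim.step()
```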
The value of DL as a tool for designing AI systems, goods, gadgets, and apps is well-documented. With technical applications in agriculture, banking, medicine, computer vision, and natural language processing, these products can be found everywhere, from the social sciences to the natural sciences. Some examples are self-driving cars, robots, chatbots, text-to-speech gadgets, devices that automatically translate text and images [11], speech recognition systems, digital assistants such as Google Now and Amazon Alexa, automatic image classification systems, systems that answer natural-language questions [12], systems that play games such as chess, Jeopardy!, Go, and poker [13] or dynamically adjust game difficulty [14], and systems that automatically add sound to silent movies.
For these reasons, DL tools are being adopted in many other domains, such as the health sciences, for disease prediction (cancer, dermatological problems, etc.). For example, in the biological sciences, Menden et al. [15] used a DL approach to forecast the survivability of a cancer cell line exposed to a drug. Alipanahi et al. [16] employed DL with a convolutional network architecture to predict the specificities of DNA- and RNA-binding proteins. Tavanaei et al. [17] employed a DL technique to predict tumor suppressor genes and oncogenes. Single-cell DNA methylation states have also been accurately predicted using DL techniques [18]. In the genomic domain, DL has been used for the prediction of breeding values and phenotypes of traits of interest in many cultivars, using environmental and genomic information as input [19,20,21,22]. The application of DL in the telecommunications field, by contrast, is relatively new; what motivates its study is the need to build more efficient and autonomous network connectivity, that is, connectivity requiring less human intervention.
In this paper, we review the applications of deep learning for next-generation cognitive networks to obtain a meta-picture of its performance and highlight how these tools can help solve the challenging problems of cognitive networks. We also provide the fundamentals of DL, the requirements for its appropriate use, general guidance on how to use the DL method effectively, the pros and cons of this technique, and the trends of DL applications.

2. Materials and Methods

Search Strategy

The present work corresponds to a systematic search of the IEEE Xplore, Elsevier, and Springer databases, focused on deep learning in the context of cognitive networks.
The search terms were: network, wireless, spectrum, traffic prediction, resource allocation, and deep learning. To ensure the effectiveness of the search, the term "deep learning" was combined with each of the other terms. In addition, the selection criteria prioritized works published within the last five years, published mainly (but not exclusively) in JCR-indexed (Journal Citation Reports) journals, with at least one citation, excluding literature reviews, and clearly indicating which DL method was used. Table 1 shows the search queries by database.
Because these search criteria returned many results, works dedicated to spectrum use, traffic flow prediction, and resource allocation were chosen. We found a total of 1623 papers and, following the selection criteria, selected 14 works, of which 10 were journal papers (71%), 3 were conference papers, and 1 was a book (Table 2).
All the journal articles are JCR indexed with an Impact Factor (IF) greater than 2 (min = 2.336, max = 25.249).
The complete search strategy can be seen in Figure 2, which is based on the flow chart of the PRISMA Statement [23].

3. Results

DL has earned a great deal of research attention in the computing field. However, its use in cognitive network systems is relatively recent [24], and several challenges need to be addressed to better handle their high complexity. Our research objective is to identify the potential of DL methods to enhance the performance of the mechanisms that manage the operation of wireless data networks. In this context, we propose grouping the challenges into three large areas of operation of these networks, which have been repeatedly addressed in related work:
  • Wireless spectrum management;
  • Energy utilization efficiency;
  • Enhanced data transmission.
The following subsections summarize these challenges and the deep learning solutions that can address them.

3.1. Wireless Spectrum Management

The explosion of internet access through wireless technologies, such as 3G, 4G, and 5G networks and wireless LANs, has caused the number of devices connected to the internet to grow without measure. As a result, the wireless spectrum is becoming an essential and scarce resource. Moreover, with so many technologies coexisting, interference, channel congestion, data collisions, and unbalanced spectrum usage become a challenge that drastically reduces overall network performance and user experience [25,26]. In this sense, we present some works discussing solutions to this challenge.
Mitigating channel interference within wireless networks is a significant challenge in improving spectrum usage performance. Channel interference can be reduced by optimizing how devices are configured to use the wireless channel. The authors in [27] proposed an effective wireless channel matching and power allocation configuration: a distributed resource matching scheme based on deep reinforcement learning (DRL) in a device-to-device (D2D) network communication scenario, called the "Distributed Multi-user Channel and communication Power matching algorithm" (DMCP). The DL core of DMCP is a double deep Q-network (DDQN), which selects the channel and transmission power level autonomously. This combination of learning, especially DRL, allows decision making based on historical information and real-time observations. Another point worth noting is that they propose that the base stations take over the training process, which helps to reduce the data communication delay. The results show that the DMCP algorithm performs well in channel assignment and transmission power selection, improving the overall throughput and energy efficiency of the network.
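As a rough illustration of the double deep Q-network idea underlying DMCP, the sketch below shows a generic DDQN target computation and a greedy (channel, power level) decision in PyTorch. The state encoding, action space sizes, network widths, and reward are illustrative assumptions; the actual DMCP design is not reproduced here.

```python
import torch
import torch.nn as nn

N_CHANNELS, N_POWER_LEVELS = 10, 4        # illustrative action space:
N_ACTIONS = N_CHANNELS * N_POWER_LEVELS   # a (channel, power level) pair
STATE_DIM = 20                            # illustrative state size
GAMMA = 0.99

def q_net():
    return nn.Sequential(nn.Linear(STATE_DIM, 128), nn.ReLU(),
                         nn.Linear(128, N_ACTIONS))

online, target = q_net(), q_net()
target.load_state_dict(online.state_dict())

def ddqn_target(reward, next_state):
    """Double DQN: the online net picks the next action and the target
    net evaluates it, which reduces Q-value overestimation."""
    with torch.no_grad():
        best = online(next_state).argmax(dim=1, keepdim=True)
        return reward + GAMMA * target(next_state).gather(1, best).squeeze(1)

# Greedy action selection maps directly to a channel/power decision:
state = torch.randn(1, STATE_DIM)
action = online(state).argmax().item()
channel, power = divmod(action, N_POWER_LEVELS)
```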
Kulin et al. [4] advocated employing deep learning to improve and regulate radio spectrum usage, addressing various issues of inefficient spectrum management, utilization, and regulation that the next generation of wireless networks faces. They proposed an end-to-end learning framework for spectrum monitoring and defined a generic technique for designing and implementing wireless signal classifiers. Convolutional neural networks (CNN) were used to automatically extract non-linear, more abstract characteristics of wireless signals that are invariant to local spectral and temporal fluctuations, and to train wireless signal classifiers. The authors presented two case studies: monitoring the radio spectrum to identify the signals in the communications channel (modulation recognition) and detecting wireless interference technologies.
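Wireless signal classifiers of this kind are often fed raw in-phase/quadrature (IQ) samples. The following sketch shows a small 1D CNN in the spirit of that design; the layer sizes, window length, and number of modulation classes are illustrative assumptions, not the architecture of [4].

```python
import torch
import torch.nn as nn

N_CLASSES = 5   # e.g., modulation types; illustrative
WINDOW = 128    # IQ samples per classification window; illustrative

# Input shape: (batch, 2, WINDOW) -- two channels for the I and Q components.
cnn = nn.Sequential(
    nn.Conv1d(2, 32, kernel_size=7, padding=3),   # local temporal features
    nn.ReLU(),
    nn.MaxPool1d(2),                              # invariance to small shifts
    nn.Conv1d(32, 64, kernel_size=5, padding=2),  # more abstract features
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(64, N_CLASSES),                     # class scores per window
)

iq_batch = torch.randn(8, 2, WINDOW)              # synthetic IQ windows
predicted_modulation = cnn(iq_batch).argmax(dim=1)
```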
Shen et al. [28] presented a deep learning-based solution for dealing with interference in communication networks, a scenario in which typical machine learning algorithms are ineffective. Because the actual interference image contains many disturbances, the same interference can take on many different shapes, making it impossible to discern the interference shape solely by extracting characteristics from the surrounding area. For spectrum interference image recognition, a deep CNN is used to classify cell interference types adequately, considerably boosting the efficiency of interference problem handling. The deep learning recognition procedure includes interference cell detection, interference type identification, and interference source location. The network takes the image disturbances and morphological changes as practical input data, with different neurons representing different features of the image. The feature database is optimized by the self-learning system through continuous neural network iteration. As a result, the interference signature feature library converges toward the actually generated interference patterns and effectively recognizes the communication network's interference. By increasing the efficiency of interference identification, the possibilities for improving network quality and user perception become more numerous.
In [29], the authors proposed SL-MAC, an intelligent spectrum learning-based medium access control (MAC) protocol for future wireless local area networks (WLAN). Their proposal combines deep learning and spectrum sensing to create an intelligent medium access system that can collect more data on channel usage: unlike standard spectrum sensing technologies, which can simply tell whether a channel is busy, it can also determine how many devices are sharing the spectrum and who they are. To implement the suggested MAC, a pre-trained CNN is deployed within the access point (AP) to detect the stations (STAs) involved in collisions (when more than one user transmits request-to-send (RTS) packets at the same time). According to the inference results, the AP schedules the data transmissions of the users involved and obtains a collision-free channel within a period. The SL-MAC protocol can retrieve information even when packets collide, training a deep neural network offline with historical radio frequency (RF) traces and inferring the STAs involved in collisions online in near real time. The SL-MAC protocol includes three operation steps. The first step is channel contention, in which the STAs compete for the channel according to the rules of a typical multiple access channel, such as the IEEE 802.11 scheme. The second step is collision detection and identification, starting from the reception of the RTS signals that arrive at the AP, where the pre-trained CNN algorithm resides. In this step, the protocol can distinguish whether the channel is free (no RF signals), whether there is only one RTS signal (collision-free), or whether there are several overlapping RTS signals (collision). Finally, in the third step, when there is a collision, transmission scheduling is executed according to the inference given by the pre-trained CNN algorithm. For this, the AP broadcasts a special CTS (clear-to-send) packet that contains a field with the transmission schedule for each STA involved in the collision. The STAs not involved remain silent (no packet transmission) during the retransmission period marked in the CTS packet. A comparison with the conventional IEEE 802.11 protocol was made to demonstrate the superiority of the SL-MAC protocol. The authors explored the impact of the inference error on the achieved throughput and examined the upper bound of the throughput gain supplied by the CNN predictor. The benefits of the SL-MAC protocol were demonstrated through extensive simulations: network capacity increases as channel access efficiency improves.
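The AP-side control flow of these three steps can be summarized in the following pseudocode-style sketch. The function names and data structures here are our own illustrative shorthand, not taken from [29]; `cnn_infer_stas` stands in for the pre-trained CNN that identifies stations from overlapping RTS RF traces.

```python
# Illustrative AP-side step for SL-MAC, assuming hypothetical callbacks
# send_cts and send_scheduling_cts provided by the radio layer.

def ap_slmac_step(rf_trace, cnn_infer_stas, send_cts, send_scheduling_cts):
    if rf_trace is None:
        return                       # channel free: no RF signal observed
    stas = cnn_infer_stas(rf_trace)  # infer which STAs sent RTS packets
    if len(stas) == 1:
        send_cts(stas[0])            # single RTS: normal collision-free CTS
    else:
        # Collision: schedule each involved STA in its own slot via a
        # special CTS; uninvolved STAs stay silent for the whole period.
        schedule = {sta: slot for slot, sta in enumerate(stas)}
        send_scheduling_cts(schedule)
```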
Mennes et al. [30] proposed a deep neural network (DNN) strategy for predicting the near-future spectrum occupancy of unknown neighboring networks. They presented a multi-agent environment that employs RL and supervised learning approaches. Existing network schedulers can use this prediction to avoid collisions with nearby networks or other electromagnetic sources. Furthermore, unlike most existing MAC algorithms, which only strive to maximize their own network performance, this concept can change its operation to avoid cross-technology interference based on spectrum consumption by different technologies. The paper thus studies the multi-channel access problem, framed as a partially observable stochastic game in which N nodes have access to C channels in an environment where other networks use the same fraction of the spectrum.
Consequently, the proposed MAC algorithm selects, from the set of available channels, the channel with the least predicted interference for its data transmission. In other words, the algorithm decides when and on which channel to transmit to avoid interference between networks, focusing on predicting the behavior of the interfering network cluster (INC). The authors investigate deep multi-agent supervised online learning to enable learning in an unknown environment by predicting spectrum usage for upcoming slots at each node. They also construct a loss function that can optimize predictions based on partially observable data. The five essential components of the proposed architecture are the spectrum monitor, the preprocessing unit, the prediction unit, the probability matrix, and the transmission scheduler. The first four units reside on each node, while the scheduler can be centralized or decentralized. The monitoring unit captures the energy of the surrounding spectrum and forwards this information to the preprocessing unit, which prepares the observation to feed the prediction unit. The latter predicts the slots that the INC will use in the upcoming super-frame, indicating whether each slot is predicted to be highly used or free. The prediction unit uses a DNN in a fully connected eight-layer model to optimize the prediction, employing a swish activation function on all layers except the last (output) layer, which uses a softmax activation on the preceding dimension. The softmax activation ensures that each cell in the output matrix, which reflects the prediction, has a value between 0 and 1, indicating the likelihood of the INC using the slot. The information from the predictor forms a probability matrix, which is the basis on which the scheduler selects slots expected to be free, avoiding collisions. An application scenario could be one where a sender node transmits a slot request to the receiver.
The receiver replies with the best available slots according to the prediction; as a result, sender and receiver synchronize on a channel and avoid a collision. To evaluate the algorithm's performance, two methods were used: simulation and real experimentation. For the simulation, a multiple-frequency time division multiple access (MF-TDMA) discrete event simulator was used, based on the time-synchronized channel hopping (6TiSCH) simulator [31]. For real experimentation, the testbed proposed in the spectrum collaboration challenge (SC2) of the DARPA competition was used [32]. The simulation results showed that the number of inter-network collisions can be reduced by 30% compared to commonly used schedulers. In real experimentation, the algorithm was shown to increase the overall throughput of the network in a variety of topologies and settings compared to an exponentially weighted moving average (EWMA) collision avoidance slot selection algorithm.
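Based on the description above (eight fully connected layers, swish activations, and a softmax output yielding per-slot usage likelihoods), the predictor unit might look roughly like this sketch. The layer widths, observation size, and number of slots are illustrative assumptions, and a per-slot used/free pair is one possible reading of "softmax on the preceding dimension".

```python
import torch
import torch.nn as nn

N_SLOTS = 40   # slots per super-frame; illustrative
OBS_DIM = 200  # size of the preprocessed spectrum observation; illustrative
HIDDEN = 256   # hidden width; illustrative

layers, in_dim = [], OBS_DIM
for _ in range(7):                                    # seven hidden layers...
    layers += [nn.Linear(in_dim, HIDDEN), nn.SiLU()]  # SiLU == swish
    in_dim = HIDDEN
layers.append(nn.Linear(in_dim, N_SLOTS * 2))         # ...plus the output layer
predictor = nn.Sequential(*layers)

def predict_slot_usage(obs):
    """Per-slot probability in (0, 1) that the INC will use the slot:
    a softmax over a used/free pair for each slot."""
    logits = predictor(obs).view(-1, N_SLOTS, 2)
    return logits.softmax(dim=-1)[..., 1]

# The result feeds the probability matrix used by the scheduler.
probability_matrix = predict_slot_usage(torch.randn(1, OBS_DIM))
```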

3.2. Energy Utilization Efficiency

Power consumption is one of the most significant challenges in wireless networks, since mobile network nodes rely on a very scarce, battery-based power supply. This resource is crucial because the energy of each node is directly related to the network lifetime. Therefore, it must be used efficiently to avoid wasteful expenditure during wireless network operation. For example, the transmission and reception of data are the functions that consume the most energy in a node, so avoiding data loss and retransmission promotes energy savings. There are several efforts in this regard; below, we present some of these works.
The authors in [33] discussed the challenge of managing the energy consumed by the sensor nodes of a network under the Internet of Things paradigm. This work proposes a model that uses a deep Q-network (DQN), a deep reinforcement learning technique, to calibrate each sensor node and reduce power consumption according to the network operating environment. Their model integrates an RL agent with a deep learning long short-term memory (LSTM) agent. The RL agent exploits its ability to make observations within a changing environment and take actions accordingly, while the LSTM agent exploits its ability to handle time series, retaining long sequences thanks to its memory cell. The RL agent's previous action, state, and reward are taken as input vectors of the proposed architecture and delivered to several LSTM layers with a batch normalization layer that increases the network's stability and training speed. Finally, the architecture includes a classic (fully connected) deep learning layer. In evaluations, the authors state that this combination demonstrated excellent results in maximizing the energy efficiency of the IoT network under a changing operating environment.
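The described arrangement (previous action, state, and reward as input vectors, LSTM layers with batch normalization, and a final fully connected layer) might be sketched as follows; all dimensions and the action count are illustrative assumptions rather than the settings of [33].

```python
import torch
import torch.nn as nn

class LSTMAgentNet(nn.Module):
    """Sketch: sequences of (prev_action, state, reward) -> action values."""
    def __init__(self, in_dim=8, hidden=64, n_actions=4):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        self.bn = nn.BatchNorm1d(hidden)      # stabilizes/speeds up training
        self.fc = nn.Linear(hidden, n_actions)  # classic fully connected head

    def forward(self, seq):                   # seq: (batch, seq_len, in_dim)
        out, _ = self.lstm(seq)               # LSTM retains long sequences
        last = out[:, -1, :]                  # summary of the time series
        return self.fc(self.bn(last))         # value per calibration action

net = LSTMAgentNet()
q_values = net(torch.randn(32, 16, 8))        # synthetic input sequences
```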
According to Zhang et al. [34], the fast-growing demand for wireless transmission has pushed mobile broadband to spread across numerous frequency bands. As a result, power consumption for multi-carrier information processing is increasing proportionally, which conflicts with the energy efficiency requirements of 5G wireless systems. This has led to the adoption of multi-carrier power amplifier (MCPA) technology, which allows many carriers to be supported by a single power amplifier. However, the authors raise an important question: how to distribute those carriers over numerous MCPAs, and whether the allocation strategy should be changed dynamically. They therefore formulated the dynamic carrier-to-MCPA allocation problem theoretically.
Furthermore, they proposed algorithms based on convex relaxation and on deep learning. The deep learning technique uses feedforward neural networks (FNN), namely a multi-layer perceptron (MLP) and a one-dimensional CNN, to approximate the non-convex function of the one-dimensional carrier allocation problem. According to their simulation data, the convex relaxation-based approach saves more energy than the deep learning-based scheme; on the other hand, the DL-based strategy outperforms the others in terms of computational complexity.
Due to the vast number and small size of the sensor nodes deployed in various wireless sensor network (WSN) applications, the throughput of nodes is insufficient, and energy is scarce. In addition, in some network environments node replacement is problematic. As a result, how to improve and extend the network life cycle is a pressing concern in today's WSNs [35]. Cooperative communication is key to improving performance and expanding network coverage [36], as it uses the broadcast characteristics of wireless systems to optimize communication between nodes in the network. A cooperative communication scheme with relay selection for WSNs based on DRL, called DQ-RSS, is proposed in [35]. This approach models the cooperative communication process as a Markov decision process (MDP) to solve a single-pair optimization problem, because an MDP is an optimal decision process for stochastic dynamical systems based on the Markov process (MP). Since the Q-learning algorithm suffers from a low learning speed in a large state space, the function approximation and generalization capabilities of deep neural networks are exploited to compensate for this limitation. Thus, DQ-RSS combines deep learning with Q-learning to accelerate learning and perform optimal relay selection. A deep Q-network (DQN) is trained according to the outage probability, and the channel state information (CSI) is observed for optimal relay selection among a set of candidates participating in cooperative communication, without the need for prior network model data. To assess the Q-value of each action, the proposed network uses two convolutional layers and two fully connected layers: the first convolutional layer consists of 20 filters with the rectified linear unit as the activation function, the second convolutional layer employs the same non-linear rectifier and has 40 filters, and 360 rectified linear units are used in the first fully connected layer and 180 in the second [35]. Simulations were carried out to evaluate the performance of DQ-RSS, using basic parameters according to the IEEE 802.15.4 standard protocol in the 2.4 GHz frequency band. The results show that DQ-RSS outperforms the Q-RSS and random selection methods in all evaluated metrics, such as outage probability, energy consumption, and average utility of the network; for example, the outage probability of DQ-RSS is roughly half that of the random relay selection scheme and approximately 30% lower than that of Q-RSS [35]. The authors propose, as future work, considering the mobility of sensor nodes and complex channel models to study their impact on real WSNs.
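Following the layer description quoted above (two convolutional layers with 20 and 40 ReLU filters, then fully connected layers of 360 and 180 rectified linear units), a sketch of such a DQN might look like this. The input shape, kernel sizes, and number of candidate relays are illustrative assumptions, as the paper's exact input encoding is not reproduced here.

```python
import torch
import torch.nn as nn

N_RELAYS = 6  # candidate relays (actions); illustrative
CSI_LEN = 64  # length of the channel state information vector; illustrative

dqn = nn.Sequential(
    nn.Conv1d(1, 20, kernel_size=3, padding=1), nn.ReLU(),   # 20 filters
    nn.Conv1d(20, 40, kernel_size=3, padding=1), nn.ReLU(),  # 40 filters
    nn.Flatten(),
    nn.Linear(40 * CSI_LEN, 360), nn.ReLU(),  # 360 rectified linear units
    nn.Linear(360, 180), nn.ReLU(),           # 180 rectified linear units
    nn.Linear(180, N_RELAYS),                 # Q-value per candidate relay
)

csi = torch.randn(1, 1, CSI_LEN)              # observed CSI, batch of one
best_relay = dqn(csi).argmax().item()         # relay selection by max Q
```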
We increasingly depend on wireless networks to perform daily tasks, which has caused a rapid expansion of networks, as in the case of smart cities. In this context, base stations control network connectivity and traffic, and some are far more densely loaded with traffic than others. These heavily loaded stations waste a large amount of energy through the power consumed in data transmission, which leads to the death of the station and reduces the network lifetime. Therefore, managing the transmission power of the base stations is necessary to improve energy utilization efficiency. The key to enabling modern cognitive wireless networks is for base stations to have self-management and dynamic adjustment capabilities, and one of the prerequisites for this is that base stations can accurately predict the network's wireless traffic. This is, nevertheless, a complex undertaking, because data traffic is highly nonlinear and complicated, characterized by temporal and spatial correlations [37]. However, most existing prediction methods do not consider temporal and spatial aspects when modeling traffic data, which makes an accurate forecast of the traffic of these networks impossible. To address this problem, a convolutional network with an attention mechanism (called LA-ResNet) was proposed in [37] to solve the spatial-temporal modeling problem and predict wireless network traffic. LA-ResNet comprises a residual network, a recurrent neural network, and an attention mechanism [37]. The residual network is first employed to extract the spatial properties of wireless network data. The residual network's output is then fed into the recurrent neural network (RNN), which uses the memory unit's time-series processing capacity to capture the temporal information. The attention mechanism directs the focus onto the intermediate output, tying the residual network and RNN modules together and increasing the prediction's accuracy and consistency. The results reveal that the LA-ResNet model outperforms other existing prediction approaches, such as RNN and 3DCNN, in traffic prediction, allowing us to conclude that LA-ResNet is an excellent option for deployment in base stations.
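The residual-net/RNN/attention pipeline described above can be sketched as follows. This is a minimal stand-in, not the published LA-ResNet: the grid size, channel counts, and use of a GRU in place of the unspecified RNN variant are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Spatial feature extraction with a skip connection."""
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))

class LAResNetSketch(nn.Module):
    """Residual net -> RNN over time -> attention over RNN outputs."""
    def __init__(self, grid=8, hidden=64):
        super().__init__()
        self.res = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1),
                                 ResidualBlock(16), ResidualBlock(16))
        self.rnn = nn.GRU(16 * grid * grid, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)           # scores each time step
        self.out = nn.Linear(hidden, grid * grid)  # next-step traffic grid

    def forward(self, x):                 # x: (batch, T, 1, grid, grid)
        b, t = x.shape[:2]
        feats = self.res(x.flatten(0, 1))          # spatial features per step
        seq = feats.flatten(1).view(b, t, -1)
        h, _ = self.rnn(seq)                       # temporal dependencies
        w = torch.softmax(self.attn(h), dim=1)     # attention weights
        ctx = (w * h).sum(dim=1)                   # focus on informative steps
        return self.out(ctx)

model = LAResNetSketch()
pred = model(torch.randn(2, 12, 1, 8, 8))  # 12 past frames of an 8x8 cell grid
```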

3.3. Enhanced Data Transmission

The primary objective of any communication system is effective data transmission. However, this is not an easy task, since multiple factors intervene, such as the type and amount of traffic and the congestion of the communication channel, which make this goal challenging to achieve. The objective becomes even more demanding in wireless networks: they are made up of devices with limited storage, processing, and energy capacities that share a transmission channel, and they are unstable and prone to failures (path losses, signal fading). That is why several works focus on this goal; below, we present some proposals that combine this objective with deep learning methods.
A routing algorithm capable of detecting link-level, node-level, and sink-level failures with high accuracy and low overhead in an IoT-enabled WSN is one of the central goals proposed in [38]. To achieve this goal, the authors propose a fault-tolerant multi-objective deep reinforcement learning (MO-DRL) agent embedded in each sensor node of the network to optimize a data routing algorithm. MO-DRL detects faulty nodes in the WSN and removes them from the transmission path, achieving fault-free data routing. They also propose a mobile sink method to complete an efficient solution, which collects data from sensor nodes with better performance than a stationary sink, enhancing network dependability and lifetime. The use of the MO-DRL algorithm is justified by its multi-objective capability and support for multi-policy methods, allowing it to work with several conflicting objectives; in this work, for example, they propose minimizing message overhead, minimizing communication delay, and maximizing network throughput, all objectives in tension with one another. In particular, the MO-DRL framework is composed of a DDQN, which consists of two DNNs: the first computes the current Q-value and updates the network parameters, while the second calculates the target Q-value and periodically copies the parameters obtained by the first. Finally, the authors proposed a mobile sink node that collects data from sensor nodes along the shortest path, obtained by solving a traveling salesman problem. The simulation results show that the proposal outperforms other algorithms in all the metrics evaluated.
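The two-network arrangement just described (one DNN computing current Q-values and updating its parameters, the other periodically copying them to compute target Q-values) corresponds to the standard hard-update pattern sketched below; the network shape and update period are illustrative assumptions.

```python
import copy
import torch.nn as nn

online = nn.Sequential(nn.Linear(12, 64), nn.ReLU(), nn.Linear(64, 5))
target = copy.deepcopy(online)  # second DNN, used for target Q-values

COPY_PERIOD = 500               # illustrative: copy every N training steps

def maybe_sync(step):
    # Periodically copy the online network's parameters into the target
    # network; between copies the target stays fixed, stabilizing learning.
    if step % COPY_PERIOD == 0:
        target.load_state_dict(online.state_dict())
```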
Narejo et al. [39] proposed using deep learning for internet traffic prediction, with the key objective of guaranteeing the quality of service (QoS) of the network's applications. They propose three different deep belief network (DBN) architectures, stacking restricted Boltzmann machines (RBMs) to create a DNN. QoS comprises a set of parameters, such as error rate, transmission rate, and other physical characteristics of the network, which must be guaranteed to meet a certain service level. These parameters can be measured and improved through mechanisms integrated into the network nodes, and they are closely related to the network traffic load; therefore, advance knowledge of the future traffic load is useful. The authors capture the non-linear hierarchical nature of internet traffic time series using an artificial neural network with four hidden layers in each model. First, the deep layers of the network are pretrained in an unsupervised manner. The expected future traces of traffic load are then forecasted at the output layer, which is trained in a supervised way during the model's fine-tuning stage. The findings revealed accurate traffic predictions while modeling the traffic data's patterns and stochastic features, with a test-set RMSE of 0.028.
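A greedy, layer-wise RBM pretraining of the kind used to build such a DBN can be sketched in NumPy as below (CD-1 contrastive divergence). The sizes, learning rate, and synthetic data are illustrative, and the supervised fine-tuning stage is only indicated in the final comment.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, lr=0.05, epochs=10):
    """One restricted Boltzmann machine trained with CD-1."""
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
    for _ in range(epochs):
        v0 = data
        h0 = sigmoid(v0 @ W + b_h)                   # positive phase
        h0_s = (rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h0_s @ W.T + b_v)               # reconstruction
        h1 = sigmoid(v1 @ W + b_h)                   # negative phase
        W += lr * (v0.T @ h0 - v1.T @ h1) / len(data)
        b_v += lr * (v0 - v1).mean(axis=0)
        b_h += lr * (h0 - h1).mean(axis=0)
    return W, b_h

# Greedy stacking: each RBM's hidden activations feed the next RBM,
# giving the unsupervised pretraining of a DBN with four hidden layers.
layer_sizes = [32, 32, 32, 32]   # illustrative hidden sizes
x = rng.random((256, 48))        # synthetic traffic windows scaled to [0, 1]
weights = []
for n_hidden in layer_sizes:
    W, b_h = train_rbm(x, n_hidden)
    weights.append((W, b_h))
    x = sigmoid(x @ W + b_h)     # representation for the next layer
# The stacked weights would then initialize a DNN whose output layer is
# trained on future traffic load, fine-tuning the whole net supervised.
```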
Yang et al. [40] proposed employing the DRL method known as deep Q-learning to develop an intelligent agent for allocating computational resources for offloaded multi-user tasks. This agent supports the ultra-reliable low-latency communications (URLLC) required by future 5G services and the IoT paradigm. The approach is based on mobile edge computing (MEC), also known as fog computing, which meets the computational demands of edge devices that lack such resources, such as the wireless devices that make up the IoT. The proposed agent, embedded in the MEC node, applies a sophisticated, dynamic policy for allocating computational resources among numerous users, also considering channel quality, data packet size, and current waiting time. After the offloaded data have been processed in the MEC node, the proposed mechanism selects a low downlink transmission rate to reduce the likelihood of error and enhance the successful transmission rate while staying within the downlink channels' delay limits.
Han et al. [41] proposed using deep learning to optimize a congestion control algorithm in wireless networks. The algorithm uses an MLP structure with one hidden layer for network congestion detection, accurately distinguishing between data congestion and wireless link errors when a packet loss triggers fast retransmission at the sender. This distinction is critical in wireless networks: it decides what type of congestion control to apply and avoids degrading the network's overall performance. They also propose reducing the transmission window size when packet loss occurs due to data congestion, using an additive increase multiplicative decrease (AIMD) algorithm. Finally, they suggest a proprietary mechanism that retransmits the lost packet only in the event of a wireless fault, without changing the size of the current congestion window. The proposed algorithm uses deep learning to distinguish network congestion from random packet loss. The minimum round-trip time (RTT) value and the current smoothed RTT value were used as inputs to the MLP structure, and the cause of packet loss was used as the output layer's response to train the algorithm. The study results reveal that the performance in an environment without wireless loss is comparable to the protocols tested. However, as the amount of wireless loss increases, the proposed algorithm beats the others, since it determines the cause of the lost packet rather than reducing the transmission window as the other protocols do.
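Since the classifier's inputs are just the minimum RTT and the current smoothed RTT, and the output is the cause of the packet loss, a toy version fits in a few lines with scikit-learn. The training data below are synthetic placeholders built on the simple assumption that congestion inflates the smoothed RTT well above the minimum RTT; the actual training data and thresholds of [41] are not reproduced.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Features: [minimum RTT, current smoothed RTT]; label: loss cause
# (0 = random wireless error, 1 = congestion). Synthetic illustration.
rng = np.random.default_rng(0)
min_rtt = rng.uniform(0.01, 0.05, 1000)
congested = rng.integers(0, 2, 1000)
srtt = min_rtt * np.where(congested == 1,
                          rng.uniform(2.0, 4.0, 1000),   # queueing delay
                          rng.uniform(1.0, 1.3, 1000))   # near-minimum RTT
X = np.column_stack([min_rtt, srtt])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)  # one hidden layer
clf.fit(X, congested)

# At a fast-retransmit event, the sender asks: congestion or wireless error?
cause = clf.predict([[0.02, 0.021]])[0]  # RTT near minimum -> wireless error
# cause == 1 would trigger AIMD window reduction; cause == 0 retransmits only.
```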
The authors of [42] proposed an integrated method that combines a classical DNN (a multilayer perceptron) with an improved K-nearest neighbor (KNN) algorithm to deal with the indoor location (or location fingerprinting) problem. The upgraded KNN algorithm builds on the conventional KNN, which, however, ignores the influence of nearby locations. The general procedure can be broken into two phases. First, the Wi-Fi received signal strength indicator (RSSI) fingerprint dataset is classified using the DNN method; the DNN is trained on the dataset in the offline stage and predicts in the online stage. In the second phase, the modified KNN technique classifies the candidate locations within a specific class to establish the mobile device's final position in the online stage. When the first phase is completed, the entire positioning scene has been divided into several clusters, from which the most likely cluster containing the target is chosen; this also increases the number of learning samples for the DNN classifier. Once the particular cluster is known, interference from other groups is reduced, and the computational cost of the KNN algorithm in the second phase decreases.
The proposal's performance was compared to that of other traditional indoor location algorithms, including random forest (RF), KNN, support vector machine (SVM), and decision tree (DT), among others. The proposed algorithm outperforms the others, since it takes advantage of both algorithms' strengths.
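The two-phase procedure (a DNN classifies the RSSI fingerprint into a cluster, then KNN runs only inside that cluster to produce the final position) can be sketched as below. The fingerprints are synthetic and all sizes are illustrative assumptions, not the dataset or hyper-parameters of [42].

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
N, N_APS, N_CLUSTERS = 600, 8, 4
rssi = rng.uniform(-90, -30, (N, N_APS))  # Wi-Fi RSSI fingerprints (dBm)
cluster = rng.integers(0, N_CLUSTERS, N)  # area of the building (label)
xy = rng.uniform(0, 50, (N, 2))           # reference positions (meters)

# Phase 1: DNN trained offline to map a fingerprint to its cluster.
dnn = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=3000)
dnn.fit(rssi, cluster)

# Phase 2: KNN restricted to the predicted cluster -> final position.
def locate(fingerprint):
    c = dnn.predict(fingerprint.reshape(1, -1))[0]
    mask = cluster == c                       # fewer, more relevant samples
    knn = KNeighborsRegressor(n_neighbors=3)  # cuts KNN's computation cost
    knn.fit(rssi[mask], xy[mask])
    return knn.predict(fingerprint.reshape(1, -1))[0]

print(locate(rssi[0]))  # estimated (x, y) position
```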

4. Discussion

Analyzing the behavior of cognitive data networks, especially wireless networks, is a great challenge. Several aspects must be considered regarding the environments where the networks are deployed: the number of devices (e.g., IoT devices [43,44,45]), the characteristics of the applications (e.g., smart environments [46,47,48]), and the transmission technologies [26,49,50], among others. The nonlinearity and complexity of data traffic flow in such networks, for example, are characterized by temporal and spatial correlation. Therefore, their study is complex, and even more so if one wants to predict their behavior.
In addition, in the case of spectrum usage, extracting meaningful information yields a massive and complex dataset that requires sophisticated and advanced algorithms for its analysis. Moreover, in data networks, learning models must allow sequential decisions to be made based on continuous feedback or prior knowledge, using algorithms that can handle multiple conflicting objectives with the flexibility and speed to make observations within a changing environment and take actions accordingly.
For this reason, summarizing the reviewed works, most use combined learning methods, with deep reinforcement learning (DRL) and convolutional neural networks (CNN) standing out: for example, the multilayer perceptron (MLP) in conjunction with reinforcement learning (RL), MLP with the CNN itself, a residual network with a recurrent neural network (RNN), RL with long short-term memory (LSTM), and double deep Q-networks, among others. However, even with this clarity, selecting the appropriate deep learning method for each communication objective within the network is not an easy task, linked as it is to the great challenge of selecting the input parameters and tuning the model's hyper-parameters.
Table 3 presents a summary of the previous challenges and the application of deep learning for addressing them.

5. Recommendations

This section presents the most important recommendations that the authors of this paper drew from conducting this review on applying DL models to the data networks domain. These recommendations amount to the future challenges that must be addressed in this matter.
ML and DL are solving long-standing problems in all areas of knowledge that had remained unsolved; thanks to new studies in this area and new computing technologies, outstanding results have been obtained. The use of ML, and in particular DL, in wireless data connectivity is an exceptional area of opportunity for building algorithms that can make more efficient decisions than traditional methods, since the behavior of data networks is a challenging problem to solve automatically given the non-linearity and complexity of the data.
For example, DL models work very well for capturing the behavior of patterns that change over time, such as RNNs, which have a kind of artificial memory, as in the case of the LSTM in conjunction with RL. Other DL models are instrumental in capturing spatial patterns and correlating information across different layouts, as is the case with CNNs. This is ideal for data networks because their data have non-linear behavior characterized by high temporal and spatial correlation.
Consequently, a mixture of various DL algorithms capturing these behavior patterns should be combined in a single framework to optimize the different responses needed in the operation of data networks.
Another vital recommendation has to do with taking advantage of DL to address problems in a multidimensional way: it automatically optimizes several parameters simultaneously, with ease and without human error, unlike traditional methods, which address only one problem at a time. Therefore, the future trend in the context of data networks is to address their optimization with multivariate models that predict different responses simultaneously. This approach makes better use of the data collected for each response and of the degree of correlation between responses, known as "information borrowing". From our point of view, we believe that work should be done to understand the main problems in data networks rather than the scenarios and types of network applications. In this way, it is possible to establish the appropriate inputs for each problem and create frameworks focused on the problems rather than the types of networks, so that more generally applicable solutions can be built. The user of such frameworks concentrates only on selecting the problems they want to solve and supplying the established inputs, without needing to be an expert in the design of DL solutions.
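A multivariate (multi-output) network of the kind advocated here shares a trunk across responses so that correlated outputs can borrow information from one another. The minimal sketch below assumes two hypothetical network responses (future traffic load and packet-loss rate) and illustrative dimensions; it is not drawn from any reviewed work.

```python
import torch
import torch.nn as nn

class MultiResponseNet(nn.Module):
    """Shared trunk + one head per response: correlated responses share
    the learned representation ("information borrowing")."""
    def __init__(self, in_dim=24, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.traffic_head = nn.Linear(hidden, 1)  # e.g., future traffic load
        self.loss_head = nn.Linear(hidden, 1)     # e.g., packet-loss rate

    def forward(self, x):
        z = self.trunk(x)
        return self.traffic_head(z), self.loss_head(z)

net = MultiResponseNet()
traffic, loss = net(torch.randn(16, 24))
# Training would sum the per-response losses so that one backward pass
# optimizes both predictions simultaneously.
```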
Finally, DL algorithms must be created so that they can be retrained automatically online. To prevent a model from becoming obsolete in the short term, new data must be incorporated into the DL model as further information is received during its operation within the data network. Today, when there are significant changes in the model inputs, models must be retrained offline and re-deployed to the system, resulting in prohibitive costs for enterprises.
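Online retraining of this kind amounts to continuing optimization on each new batch as it arrives, instead of refitting offline and re-deploying. A minimal sketch follows, with a hypothetical measurement stream and a deliberately small learning rate so the deployed model adapts to drift without forgetting:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optim = torch.optim.Adam(model.parameters(), lr=1e-4)  # small lr for drift
loss_fn = nn.MSELoss()

def on_new_measurements(x_new, y_new):
    """Called as fresh network measurements arrive during operation; the
    deployed model keeps learning instead of being retrained offline."""
    optim.zero_grad()
    loss = loss_fn(model(x_new), y_new)
    loss.backward()
    optim.step()

# Simulated stream of incoming batches:
for _ in range(100):
    on_new_measurements(torch.randn(8, 10), torch.randn(8, 1))
```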

6. Limitations

Like every systematic review, this one has three significant limitations: (1) the limited number of databases consulted; (2) the search phrases used; and (3) the risk of bias in the selection of works.
By using three databases as query sources, interesting works on cognitive data networks could have been left aside; however, this limitation is mitigated by the fact that the databases consulted are the most relevant for this area.
An incorrect selection of search phrases can leave out research with significant contributions to the cognitive data networks community. That is why the search words, as well as their operators, were evaluated in detail.
The number of publications on the use of DL in data networks is increasing, so the selection of the works included in this review carried a risk of bias due to the authors' personal preferences. Therefore, to avoid bias, the inclusion and exclusion criteria were clearly defined, and a cross-assessment was performed when analyzing the abstracts and full texts.

7. Conclusions

This paper presents a review of recent research using deep learning algorithms to optimize the operation of cognitive data networks, focused on wireless spectrum management, energy efficiency, and improved data transmission. We found exciting applications showing that this trend will continue to grow, as they help solve significant problems in this area. The traditional methods used so far have focused on creating reactive mechanisms that cannot collect data to continue learning and have no possibility of predicting critical changes in network operation.
The works studied in this paper showed a preference for combining DL methods in a single reference framework, highlighting the use of convolutional neural network (CNN) and deep reinforcement learning (DRL) methods. This is justified, since the flow of traffic and the spectrum usage in these networks have a high degree of nonlinearity and complexity, characterized by temporal and spatial correlation; furthermore, this entails generating massive and complex datasets requiring sophisticated and advanced algorithms for analysis.
Creating frameworks that address data network problems in a multidimensional way is crucial for integrating several issues in the same framework and predicting different responses simultaneously, which is known in statistics as multivariate modeling.
For all of the above, it is undeniable that next-generation data networks must include deep learning (DL) methods in the design of their operating mechanisms if they are to meet the data transmission quality required by new network applications.

Author Contributions

Conceptualization, R.B.-M. and P.C.S.-M.; methodology, O.A.M.-L. and L.E.A.-R.; formal analysis, R.B.-M., O.A.M.-L. and P.C.S.-M.; data curation, J.I.N.H.; writing—original draft preparation, R.B.-M. and P.C.S.-M.; writing—review and editing, J.I.N.H., L.E.A.-R. and O.A.M.-L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

This study was developed within the framework of the project “Contribution to homecare systems for older adults through iTV and IoT” from the Universidad de Colima and the University of Vigo.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chen, M.; Challita, U.; Saad, W.; Yin, C.; Debbah, M. Machine Learning for Wireless Networks with Artificial Intelligence: A Tutorial on Neural Networks. arXiv 2017, arXiv:1710.02913. [Google Scholar]
  2. Thomas, R.W.; DaSilva, L.A.; MacKenzie, A.B. Cognitive Networks. In Proceedings of the First IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks, Baltimore, MD, USA, 8–11 November 2005; pp. 352–360. [Google Scholar]
  3. Fortuna, C.; Mohorcic, M. Trends in the Development of Communication Networks: Cognitive Networks. Comput. Netw. 2009, 53, 1354–1376. [Google Scholar] [CrossRef]
  4. Kulin, M.; Kazaz, T.; Moerman, I.; De Poorter, E. End-to-End Learning From Spectrum Data: A Deep Learning Approach for Wireless Signal Identification in Spectrum Monitoring Applications. IEEE Access 2018, 6, 18484–18501. [Google Scholar] [CrossRef]
  5. Samuel, A.L. Some Studies in Machine Learning Using the Game of Checkers. IBM J. Res. Dev. 1959, 3, 210–229. [Google Scholar] [CrossRef]
  6. Witten, I.; Frank, E.; Hall, M. Data Mining: Practical Machine Learning Tools and Techniques, 3rd ed.; Elsevier, Ed.; Elsevier: New York, NY, USA, 2011; ISBN 978-0-08-089036-4. [Google Scholar]
  7. Kononenko, I.; Kukar, M. Machine Learning and Data Mining: Introduction to Principles and Algorithms; Horwood Publishing Limited: Woodgate, UK, 2007; ISBN 1-904275-21-4. [Google Scholar]
  8. Dougherty, G. Pattern Recognition and Classification-An Introduction, 1st ed.; Springer: New York, NY, USA, 2013; ISBN 978-1-4614-5322-2. [Google Scholar]
  9. Kohonen, T. Self-Organizing Maps, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 1997; ISBN 978-3-642-97966-8. [Google Scholar]
  10. Lewis, N.D. Deep Learning Made Easy with R: A Gentle Introduction For Data Science; CreateSpace Independent Publishing Platform: Scotts Valley, CA, USA, 2016; ISBN 978-1-5195-1421-9. [Google Scholar]
  11. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  12. Goldberg, Y. A Primer on Neural Network Models for Natural Language Processing. J. Artif. Intell. Res. 2016, 57, 345–420. [Google Scholar] [CrossRef] [Green Version]
  13. Makridakis, S.; Spiliotis, E.; Assimakopoulos, V. Statistical and Machine Learning Forecasting Methods: Concerns and Ways Forward. PLoS ONE 2018, 13, e0194889. [Google Scholar] [CrossRef] [Green Version]
  14. Alvarado Villa, D.A.; Montesinos López, O.A.; Santana-Mancilla, P.C. Training of an Intelligent Agent to Improve the Gaming Experience for Video Gamers. Av. IHC 2021, 123–125. [Google Scholar] [CrossRef]
  15. Menden, M.P.; Iorio, F.; Garnett, M.; McDermott, U.; Benes, C.H.; Ballester, P.J.; Saez-Rodriguez, J. Machine Learning Prediction of Cancer Cell Sensitivity to Drugs Based on Genomic and Chemical Properties. PLoS ONE 2013, 8, e61318. [Google Scholar] [CrossRef] [Green Version]
  16. Alipanahi, B.; Delong, A.; Weirauch, M.T.; Frey, B.J. Predicting the Sequence Specificities of DNA- and RNA-Binding Proteins by Deep Learning. Nat. Biotechnol. 2015, 33, 831–838. [Google Scholar] [CrossRef]
17. Tavanaei, A.; Anandanadarajah, N.; Maida, A.; Loganantharaj, R. A Deep Learning Model for Predicting Tumor Suppressor Genes and Oncogenes from PDB Structure. bioRxiv 2017.
18. Angermueller, C.; Lee, H.J.; Reik, W.; Stegle, O. DeepCpG: Accurate Prediction of Single-Cell DNA Methylation States Using Deep Learning. Genome Biol. 2017, 18, 67.
19. Montesinos-López, A.; Montesinos-López, O.A.; Gianola, D.; Crossa, J.; Hernández-Suárez, C.M. Multi-Environment Genomic Prediction of Plant Traits Using Deep Learners with Dense Architecture. G3 Genes Genomes Genet. 2018, 8, 3813–3828.
20. Montesinos-López, O.A.; Montesinos-López, A.; Crossa, J.; Gianola, D.; Hernández-Suárez, C.M.; Martín-Vallejo, J. Multi-Trait, Multi-Environment Deep Learning Modeling for Genomic-Enabled Prediction of Plant Traits. G3 Genes Genomes Genet. 2018, 8, 3829–3840.
21. Montesinos-López, O.A.; Martín-Vallejo, J.; Crossa, J.; Gianola, D.; Hernández-Suárez, C.M.; Montesinos-López, A.; Juliana, P.; Singh, R. A Benchmarking between Deep Learning, Support Vector Machine and Bayesian Threshold Best Linear Unbiased Prediction for Predicting Ordinal Traits in Plant Breeding. G3 Genes Genomes Genet. 2019, 9, 601–618.
22. Montesinos-López, O.A.; Montesinos-López, A.; Tuberosa, R.; Maccaferri, M.; Sciara, G.; Ammar, K.; Crossa, J. Multi-Trait, Multi-Environment Genomic Prediction of Durum Wheat with Genomic Best Linear Unbiased Predictor and Deep Learning Methods. Front. Plant Sci. 2019, 10, 1311.
23. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ 2021, 372, n71.
24. Fadlullah, Z.M.; Tang, F.; Mao, B.; Kato, N.; Akashi, O.; Inoue, T.; Mizutani, K. State-of-the-Art Deep Learning: Evolving Machine Intelligence Toward Tomorrow’s Intelligent Network Traffic Control Systems. IEEE Commun. Surv. Tutor. 2017, 19, 2432–2455.
25. Besher, K.M.; Nieto-Hipolito, J.I.; Buenrostro-Mariscal, R.; Ali, M.Z. Spectrum Based Power Management for Congested IoT Networks. Sensors 2021, 21, 2681.
26. Besher, K.M.; Nieto-Hipolito, J.-I.; Vazquez Briseno, M.; Buenrostro Mariscal, R. SenPUI: Solutions for Sensing and Primary User Interference in Cognitive Radio Implementation of a Wireless Sensor Network. Wirel. Commun. Mob. Comput. 2019, 2019, 1–8.
27. Yuan, Y.; Li, Z.; Liu, Z.; Yang, Y.; Guan, X. Double Deep Q-Network Based Distributed Resource Matching Algorithm for D2D Communication. IEEE Trans. Veh. Technol. 2022, 71, 984–993.
28. Shen, A.; Liu, Y.; Zhang, Y.; Guo, B.; Xu, Z.T.; Shen, J.H.; Fang, Y. The Method of Interference Recognition in Mobile Communication Network Based on Deep Learning. In Lecture Notes in Electrical Engineering; Springer: Berlin/Heidelberg, Germany, 2019; Volume 494, pp. 296–306.
29. Yang, B.; Cao, X.; Omotere, O.; Li, X.; Han, Z.; Qian, L. Improving Medium Access Efficiency with Intelligent Spectrum Learning. IEEE Access 2020, 8, 94484–94498.
30. Mennes, R.; De Figueiredo, F.A.P.; Latré, S. Multi-Agent Deep Learning for Multi-Channel Access in Slotted Wireless Networks. IEEE Access 2020, 8, 95032–95045.
31. Palattella, M.R.; Watteyne, T.; Wang, Q.; Muraoka, K.; Accettura, N.; Dujovne, D.; Grieco, L.A.; Engel, T. On-the-Fly Bandwidth Reservation for 6TiSCH Wireless Industrial Networks. IEEE Sens. J. 2016, 16, 550–560.
32. Tilghman, P. AI Will Rule the Airwaves: A DARPA Grand Challenge Seeks Autonomous Radios to Manage the Wireless Spectrum. IEEE Spectr. 2019, 56, 28–33.
33. Ashiquzzaman, A.; Lee, H.; Um, T.-W.; Kim, J. Energy-Efficient IoT Sensor Calibration with Deep Reinforcement Learning. IEEE Access 2020, 8, 97045–97055.
34. Zhang, S.; Xiang, C.; Cao, S.; Xu, S.; Zhu, J. Dynamic Carrier to MCPA Allocation for Energy Efficient Communication: Convex Relaxation versus Deep Learning. IEEE Trans. Green Commun. Netw. 2019, 3, 628–640.
35. Su, Y.; Lu, X.; Zhao, Y.; Huang, L.; Du, X. Cooperative Communications with Relay Selection Based on Deep Reinforcement Learning in Wireless Sensor Networks. IEEE Sens. J. 2019, 19, 9561–9569.
36. Ibrahim, A.S.; Sadek, A.K.; Su, W.; Liu, K.J.R. Cooperative Communications with Relay-Selection: When to Cooperate and Whom to Cooperate With? IEEE Trans. Wirel. Commun. 2008, 7, 2814–2827.
37. Li, M.; Wang, Y.; Wang, Z.; Zheng, H. A Deep Learning Method Based on an Attention Mechanism for Wireless Network Traffic Prediction. Ad Hoc Netw. 2020, 107, 102258.
38. Agarwal, V.; Tapaswi, S.; Chanak, P. Intelligent Fault-Tolerance Data Routing Scheme for IoT-Enabled WSNs. IEEE Internet Things J. 2022.
39. Narejo, S.; Pasero, E. An Application of Internet Traffic Prediction with Deep Neural Network; Springer: Cham, Switzerland, 2018; pp. 139–149.
40. Yang, T.; Hu, Y.; Gursoy, M.C.; Schmeink, A.; Mathar, R. Deep Reinforcement Learning Based Resource Allocation in Low Latency Edge Computing Networks. In Proceedings of the 2018 15th International Symposium on Wireless Communication Systems (ISWCS), Lisbon, Portugal, 28–31 August 2018; pp. 1–5.
41. Han, K.; Hwang, A.; Lee, J.Y.; Kim, B.C. Design and Performance Evaluation of Enhanced Congestion Control Algorithm for Wireless TCP by Using a Deep Learning. In Proceedings of the International Conference on Electronics, Information and Communication (ICEIC 2018), Honolulu, HI, USA, 24–27 January 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–2.
42. Dai, P.; Yang, Y.; Wang, M.; Yan, R. Combination of DNN and Improved KNN for Indoor Location Fingerprinting. Wirel. Commun. Mob. Comput. 2019, 2019, 4283857.
43. Santana-Mancilla, P.C.; Anido-Rifón, L.E.; Contreras-Castillo, J.; Buenrostro-Mariscal, R. Heuristic Evaluation of an IoMT System for Remote Health Monitoring in Senior Care. Int. J. Environ. Res. Public Health 2020, 17, 1586.
44. Durán-Vega, L.A.; Santana-Mancilla, P.C.; Buenrostro-Mariscal, R.; Contreras-Castillo, J.; Anido-Rifón, L.E.; García-Ruiz, M.A.; Montesinos-López, O.A.; Estrada-González, F. An IoT System for Remote Health Monitoring in Elderly Adults through a Wearable Device and Mobile Application. Geriatrics 2019, 4, 34.
45. Guzman-Sandoval, V.M.; Gaytan-Lugo, L.S.; Santana-Mancilla, P.C. I-Care: An IoMT Remote Monitoring System of Physiological Pain in Pediatric Patients. In Proceedings of the 2021 Mexican International Conference on Computer Science (ENC), Morelia, Mexico, 9 August 2021; pp. 1–4.
46. Santana-Mancilla, P.C.; Contreras-Castillo, J.; Anido-Rifón, L.E. Designing for Social ITV: Improving the Shared Experience of Home Care Systems; ACM: New York, NY, USA, 2019; pp. 1–4.
47. Santana-Mancilla, P.; Anido-Rifón, L. The Technology Acceptance of a TV Platform for the Elderly Living Alone or in Public Nursing Homes. Int. J. Environ. Res. Public Health 2017, 14, 617.
48. Santana-Mancilla, P.C.; Magaña Echeverría, M.A.; Rojas Santos, J.C.; Nieblas Castellanos, J.A.; Salazar Díaz, A.P. Towards Smart Education: Ambient Intelligence in the Mexican Classrooms. Procedia-Soc. Behav. Sci. 2013, 106, 3141–3148.
49. Guerrero Ibanez, J.A.; Garcia Morales, L.A.; Contreras Castillo, J.J.; Buenrostro Mariscal, R.; Cosio Leon, M. HYRMA: A Hybrid Routing Protocol for Monitoring of Marine Environments. IEEE Latin Am. Trans. 2015, 13, 1562–1568.
50. Buenrostro-Mariscal, R.; Cosio-Leon, M.; Nieto-Hipolito, J.-I.; Guerrero-Ibanez, J.-A.; Vazquez-Briseno, M.; Sanchez-Lopez, J.-D. WSN-HaDaS: A Cross-Layer Handoff Management Protocol for Wireless Sensor Networks, a Practical Approach to Mobility. IEICE Trans. Commun. 2015, E98.B, 1333–1344.
Figure 1. Connected environments through cognitive data networks.
Figure 2. Search strategy flow diagram.
Table 1. Search strategy and queries used in each database.

| Database | Search Query | Results |
|---|---|---|
| IEEE Xplore | ("Document Title":network) AND ("Document Title":"resource allocation") AND ("Document Title":deep learning) | 69 |
| IEEE Xplore | ("Document Title":wireless) AND ("Document Title":communications) AND ("Document Title":"deep learning") | 32 |
| IEEE Xplore | ("Document Title":network) AND ("Document Title":traffic prediction) AND ("Document Title":deep learning) | 24 |
| IEEE Xplore | ("Document Title":network) AND ("Document Title":"congestion control") AND ("Document Title":deep learning) | 6 |
| IEEE Xplore | ("Document Title":wireless) AND ("Document Title":spectrum) AND ("Document Title":"deep learning") | 5 |
| ScienceDirect | TITLE ("deep learning") AND (network) AND ("resource allocation") | 159 |
| ScienceDirect | TITLE ("deep learning") AND (wireless) AND (communications) | 218 |
| ScienceDirect | TITLE ("deep learning") AND (wireless) AND (spectrum) | 118 |
| ScienceDirect | TITLE ("deep learning") AND (network) AND ("traffic prediction") | 19 |
| ScienceDirect | TITLE ("deep learning") AND (network) AND ("congestion control") | 14 |
| Springer Link | TITLE = ("deep learning") AND (wireless) AND (communications) | 620 |
| Springer Link | TITLE = ("deep learning") AND (wireless) AND (spectrum) | 152 |
| Springer Link | TITLE = ("deep learning") AND (network) AND ("resource allocation") | 118 |
| Springer Link | TITLE = ("deep learning") AND (network) AND ("traffic prediction") | 63 |
| Springer Link | TITLE = ("deep learning") AND (network) AND ("congestion control") | 6 |
Table 2. List of journals, books, and conferences where the reviewed papers were published.

| Title | Publisher/Organizer | Type | n |
|---|---|---|---|
| IEEE Access | IEEE | Journal | 5 |
| IEEE Transactions on Green Communications and Networking | IEEE | Journal | 1 |
| IEEE Transactions on Vehicular Technology | IEEE | Journal | 1 |
| IEEE Internet of Things Journal | IEEE | Journal | 1 |
| Wireless Communications and Mobile Computing | Hindawi | Journal | 1 |
| Ad Hoc Networks | Elsevier | Journal | 1 |
| Multidisciplinary Approaches to Neural Computing | Springer | Book | 1 |
| International Conference on Signal and Information Processing, Networking, and Computers | Springer | Conference | 1 |
| International Conference on Electronics, Information, and Communications | IEEE | Conference | 1 |
| International Symposium on Wireless Communication Systems | IEEE | Conference | 1 |
Table 3. Challenges of cognitive networks solved by deep learning applications.

| Challenge | Work | Deep Learning Technique to Solve the Challenge | Issue to Solve |
|---|---|---|---|
| Wireless spectrum management | [4,28,29] | Convolutional Neural Network | Improve and regulate radio spectrum utilization; smart spectrum learning for medium access control; spectrum interference image recognition |
| Wireless spectrum management | [27,30] | Multilayer Perceptron and Reinforcement Learning | Predict spectrum occupancy in wireless networks |
| Efficient energy utilization | [33,35] | Reinforcement Learning | Prolong the network lifetime |
| Efficient energy utilization | [34] | Multilayer Perceptron and Convolutional Neural Network | Dynamically allocate multiple carriers through a single power amplifier |
| Efficient energy utilization | [37] | Residual Network and Recurrent Neural Network | Predict wireless network traffic to improve energy utilization in base stations |
| Enhanced data transmission | [39] | Deep Belief Network and Restricted Boltzmann Machine | Predict Internet traffic to guarantee QoS |
| Enhanced data transmission | [41] | Multilayer Perceptron | Congestion control and wireless error detection |
| Enhanced data transmission | [38,40] | Deep Reinforcement Learning | Allocate computational resources for offloaded tasks |
| Enhanced data transmission | [42] | Multilayer Perceptron and K-Nearest Neighbors | Solve the indoor localization problem and determine the final position of the mobile device |
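To make the first row of Table 3 concrete, the sketch below illustrates how a convolutional neural network can be trained to classify spectrum occupancy from spectrogram-like inputs. It is a minimal illustration written for this review, not code from any of the cited works; the layer sizes, input shape, and the randomly generated training data are assumptions chosen only for brevity.

```python
import numpy as np
from tensorflow.keras import layers, models

# Hypothetical stand-in for captured radio data: 1000 spectrograms of
# 64 time steps x 64 frequency bins, labeled occupied (1) or idle (0).
# In the reviewed works these would come from real I/Q measurements.
rng = np.random.default_rng(seed=42)
X = rng.normal(size=(1000, 64, 64, 1)).astype("float32")
y = rng.integers(0, 2, size=(1000,)).astype("float32")

# A small CNN classifier in the spirit of the spectrum-sensing
# approaches summarized in Table 3 [4,28,29]; illustrative only.
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"),  # local time-frequency features
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # P(channel occupied)
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, validation_split=0.2)
```

A cognitive node could then threshold the predicted occupancy probability to decide whether a channel is safe to access, which is the medium-access use case targeted in [29,30].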
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.