Applications of AI for 5G and Beyond Communications: Network Management, Operation, and Automation

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (15 September 2020) | Viewed by 44348

Special Issue Editors


Prof. Dr. Yeong Min Jang
Guest Editor
Wireless Communications and Artificial Intelligence Lab., Kookmin University, Seongbuk-Gu, Seoul 136-702, Republic of Korea
Interests: artificial intelligence (AI); big data; internet of energy; health; 5G/6G wireless communications; multimedia; computer vision; IoT platform

Dr. Mostafa Zaman Chowdhury
Guest Editor
Kookmin University, Seoul, Republic of Korea
Interests: small cell networks; convergence networks; 5G/6G communications; optical wireless communications; IoT; artificial intelligence

Prof. Dr. Takeo Fujii
Guest Editor
Advanced Wireless and Communication Research Center (AWCC), The University of Electro-Communications, Tokyo 182-8585, Japan
Interests: wireless ad hoc networks; cognitive radio; wireless sensing technology; wireless network protocols; mobile network communications; ITS; software radio

Prof. Dr. Juan Carlos Cano
Guest Editor
Department of Computer Engineering, Universitat Politècnica de València, 46022 Valencia, Spain
Interests: mobile ad hoc networks (MANETs); vehicular networks (VANETs); mobile communication; computer networks; wireless networks

Special Issue Information

Dear Colleagues,

Artificial Intelligence (AI) and Machine Learning (ML) are among the fastest-growing and most in-demand techniques in the development of information and communication technology. Recent advances in AI and ML are delivering remarkable solutions and accomplishing tasks that once seemed impossible. Applying AI techniques to wireless communications will facilitate automation in network management and operations. Fifth-Generation (5G) and beyond communication systems are expected to provide services with massive connectivity, ultra-high data rates, ultra-low latency, extremely high security, and extremely low energy consumption. These goals will be very difficult to achieve without automation of network systems, and applications of AI techniques in communication technologies are expected to make them possible. AI can provide intelligent solutions for the design, management, and optimization of wireless resources, and AI/ML techniques will improve the way networks are managed, operated, and automated. They will also be a strong platform for supporting software-defined networking (SDN) and network function virtualization (NFV), which are considered key technologies for the deployment of 5G and beyond communication systems. Finally, AI techniques can handle the increased complexity arising from heterogeneous network systems.

This Special Issue calls for high-quality, unpublished research on recent advances in the application of AI to heterogeneous wireless communication systems. Contributions may present and solve open research problems, integrate efficient novel solutions, and provide performance evaluations and comparisons with existing solutions. Theoretical as well as experimental studies of typical and newly emerging AI techniques, and of use cases enabled by recent advances in wireless communications, are encouraged. High-quality review papers are also welcome.

Potential topics include, but are not limited to, the following:

  • Theoretical approaches and methodologies for AI-enabled communication systems
  • AI and ML for network management
  • AI-enabled network design and architecture
  • AI and ML in wireless communications and networking
  • Radio resource management
  • AI-enabled SDN and NFV
  • AI-enabled dynamic network slicing
  • AI-enabled security methods for IoT
  • AI-based network intelligence for IoT
  • Sequential analysis and reinforcement learning for wireless communications
  • AI-enabled ultra-dense networks
  • Big data-enabled wireless networking
  • AI-enabled network pricing models
  • Graph computing for communication networks
  • Signal processing over networks and graphs
  • Energy-efficient network operation

Prof. Dr. Yeong Min Jang
Dr. Mostafa Zaman Chowdhury
Prof. Dr. Takeo Fujii
Prof. Dr. Juan Carlos Cano
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Artificial Intelligence
  • 5G communication
  • network management

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (8 papers)


Research

16 pages, 6116 KiB  
Article
Artificial Intelligence Enabled Routing in Software Defined Networking
by Yan-Jing Wu, Po-Chun Hwang, Wen-Shyang Hwang and Ming-Hua Cheng
Appl. Sci. 2020, 10(18), 6564; https://doi.org/10.3390/app10186564 - 20 Sep 2020
Cited by 34 | Viewed by 6704
Abstract
Software-defined networking (SDN) is an emerging networking architecture that separates the control plane from the data plane and moves network management to a central point, called the controller. The controller is responsible for preparing the flow tables of each switch in the data plane. Although dynamic routing can perform rerouting in case of congestion by periodically monitoring the status of each data flow, two problems remain unsolved: choosing a suitable monitoring period duration, and the lack of ability to learn from past experience so as to avoid similar but ineffective route decisions. This paper presents an artificial intelligence-enabled routing (AIER) mechanism with congestion avoidance in SDN, which can not only alleviate the impact of monitoring periods on dynamic routing, but also provide learning ability and superior route decisions by introducing artificial intelligence (AI) technology. We evaluate the performance of the proposed AIER mechanism on the Mininet simulator by installing three additional modules, namely, topology discovery, monitoring period, and an artificial neural network, in the control plane. The effectiveness and superiority of our proposed AIER mechanism are demonstrated by performance metrics, including average throughput, packet loss ratio, and packet delay versus data rate for different monitoring periods in the system.
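For readers who want the flavor of the approach, the sketch below is a minimal illustration, not the authors' implementation: the network architecture, input features, and (untrained) weights are all hypothetical. It shows how a controller might score candidate paths with a small neural network fed by monitored link statistics and pick the least congested one:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical two-layer perceptron: path statistics -> congestion score.
    # In practice the weights would come from training on past routing outcomes.
    W1, b1 = rng.normal(size=(8, 3)), np.zeros(8)
    W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)

    def congestion_score(stats):
        # stats: [link utilization, packet loss ratio, delay] for one path
        h = np.tanh(W1 @ stats + b1)
        return (W2 @ h + b2).item()

    def select_route(candidates):
        # Choose the candidate path with the lowest predicted congestion.
        return min(candidates, key=lambda p: congestion_score(p["stats"]))

    paths = [
        {"id": "s1-s2-s4", "stats": np.array([0.7, 0.02, 0.013])},
        {"id": "s1-s3-s4", "stats": np.array([0.4, 0.01, 0.021])},
    ]
    print(select_route(paths)["id"])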

14 pages, 2655 KiB  
Article
Effective Feature Selection Method for Deep Learning-Based Automatic Modulation Classification Scheme Using Higher-Order Statistics
by Sang Hoon Lee, Kwang-Yul Kim and Yoan Shin
Appl. Sci. 2020, 10(2), 588; https://doi.org/10.3390/app10020588 - 13 Jan 2020
Cited by 20 | Viewed by 3517
Abstract
Recently, automatic modulation classification (AMC) schemes have been considered in order to satisfy the requirements of both commercial and military communication systems. As a result, various artificial intelligence algorithms, such as deep neural networks (DNNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs), have been studied to improve AMC performance. However, since the AMC process should operate in real time, its computational complexity must be kept low, and little research has addressed the complexity of the AMC process from a data-mining perspective. In this paper, we propose a correlation coefficient-based effective feature selection method that can maintain classification performance while reducing the computational complexity of the AMC process. The proposed method calculates the correlation coefficients of second-, fourth-, and sixth-order cumulants with the proposed formula and selects effective features according to the calculated values. A deep learning-based AMC method is used to measure and compare classification performance. The simulation results indicate that the AMC performance of the proposed method is superior to that of conventional methods, even though it uses a small number of features.
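A minimal sketch of the underlying idea follows; the cumulant estimators and the greedy selection rule below are generic textbook choices, not the paper's exact formula. It computes higher-order-statistics features per signal burst, then discards features that are strongly correlated with ones already kept:

    import numpy as np

    def hos_features(x):
        # Simplified higher-order statistics of a zero-mean complex burst:
        # standard cumulant estimates C20, C40, C60 from sample moments.
        m20, m40, m60 = np.mean(x**2), np.mean(x**4), np.mean(x**6)
        c20 = m20
        c40 = m40 - 3 * m20**2
        c60 = m60 - 15 * m40 * m20 + 30 * m20**3
        return np.abs(np.array([c20, c40, c60]))

    def select_features(F, threshold=0.95):
        # Greedily keep feature columns whose |correlation| with every
        # already-kept column stays below the threshold.
        rho = np.abs(np.corrcoef(F, rowvar=False))
        kept = []
        for j in range(F.shape[1]):
            if all(rho[j, k] < threshold for k in kept):
                kept.append(j)
        return kept

    rng = np.random.default_rng(0)
    # Toy dataset: QPSK-like bursts in noise, one feature vector per burst.
    symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=(200, 1024))
    bursts = symbols / np.sqrt(2) + 0.1 * (rng.normal(size=(200, 1024))
                                           + 1j * rng.normal(size=(200, 1024)))
    F = np.array([hos_features(x) for x in bursts])
    print(select_features(F))   # indices of the retained cumulant features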

16 pages, 4006 KiB  
Article
Adaptive Natural Gradient Method for Learning of Stochastic Neural Networks in Mini-Batch Mode
by Hyeyoung Park and Kwanyong Lee
Appl. Sci. 2019, 9(21), 4568; https://doi.org/10.3390/app9214568 - 28 Oct 2019
Cited by 2 | Viewed by 3019
Abstract
The gradient descent method is an essential algorithm for the learning of neural networks. Among the diverse variations of gradient descent that have been developed to accelerate learning, natural gradient learning is based on the theory of information geometry on the stochastic neuromanifold and is known to have ideal convergence properties. Despite its theoretical advantages, the pure natural gradient has limitations that prevent its practical use: obtaining its explicit value requires knowing the true probability distribution of the input variables and inverting a matrix whose size is the square of the number of parameters. Although an adaptive estimation of the natural gradient has been proposed as a solution, it was originally developed for the online learning mode, which is computationally inefficient for learning from large data sets. In this paper, we propose a novel adaptive natural gradient estimation for the mini-batch learning mode, which is commonly adopted for big data analysis. For two representative stochastic neural network models, we present explicit parameter update rules and a learning algorithm. Through experiments on three benchmark problems, we confirm that the proposed method has superior convergence properties to conventional methods.
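The flavor of such an adaptive estimate can be sketched as follows. This is a generic rank-one scheme, not the paper's model-specific update rules: the inverse Fisher matrix is tracked incrementally via the Sherman–Morrison identity and applied to each mini-batch gradient, so no explicit matrix inversion is ever performed:

    import numpy as np

    def adaptive_natural_gradient_step(theta, Ginv, grad, lr=0.05, eps=0.01):
        # Track G <- (1 - eps) * G + eps * g g^T without ever forming G:
        # the recursion below is its Sherman-Morrison inverse.
        g = grad.reshape(-1, 1)
        Gg = Ginv @ g
        denom = (1 - eps) + eps * (g.T @ Gg).item()
        Ginv = (Ginv - (eps / denom) * (Gg @ Gg.T)) / (1 - eps)
        theta = theta - lr * (Ginv @ g).ravel()   # natural-gradient step
        return theta, Ginv

    # Check the recursion against direct inversion on a toy gradient.
    theta, Ginv = np.array([5.0, 5.0]), np.eye(2)
    g = np.array([2.0, -1.0])
    theta, Ginv = adaptive_natural_gradient_step(theta, Ginv, g, eps=0.1)
    G_direct = 0.9 * np.eye(2) + 0.1 * np.outer(g, g)
    print(np.allclose(Ginv, np.linalg.inv(G_direct)))   # True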

15 pages, 7599 KiB  
Article
A Reinforcement-Learning-Based Distributed Resource Selection Algorithm for Massive IoT
by Jing Ma, So Hasegawa, Song-Ju Kim and Mikio Hasegawa
Appl. Sci. 2019, 9(18), 3730; https://doi.org/10.3390/app9183730 - 6 Sep 2019
Cited by 19 | Viewed by 6714
Abstract
Massive IoT, comprising large numbers of resource-constrained IoT devices, has gained great attention. IoT devices generate enormous traffic, which causes network congestion. To manage network congestion, multi-channel-based algorithms have been proposed. However, most existing multi-channel algorithms require strict synchronization and extra overhead for negotiating channel assignment, which poses significant challenges for resource-constrained IoT devices. In this paper, a distributed channel selection algorithm utilizing tug-of-war (TOW) dynamics is proposed for improving the successful frame delivery of the whole network by letting IoT devices adaptively select suitable channels for communication. The proposed TOW dynamics-based channel selection algorithm has a simple reinforcement learning procedure that only needs to receive the acknowledgment (ACK) frame, while requiring minimal memory and computation capability; thus, it can run on resource-constrained IoT devices. We prototype the proposed algorithm on an extremely resource-constrained single-board computer, hereafter called the cognitive-IoT prototype. The prototype is densely deployed in a frequently changing radio environment for evaluation experiments. The evaluation results show that the cognitive-IoT prototype accurately and adaptively selects a suitable channel as the real environment varies, thereby improving the successful frame ratio of the network.
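To convey the learning rule, here is a heavily simplified sketch; the oscillation term, weights, and toy channel qualities are illustrative choices rather than the paper's exact dynamics. Each channel accumulates a displacement that grows on ACK and shrinks on a missing ACK, and the device transmits on the channel with the largest displacement:

    import numpy as np

    class TowChannelSelector:
        def __init__(self, n_channels, omega=1.0, amplitude=0.5):
            self.q = np.zeros(n_channels)   # per-channel displacement
            self.omega = omega              # penalty weight on failure
            self.amplitude = amplitude      # exploration oscillation
            self.n, self.t = n_channels, 0

        def select(self):
            # Phase-shifted oscillation nudges occasional exploration.
            phases = 2 * np.pi * (self.t / 10.0 + np.arange(self.n) / self.n)
            self.t += 1
            return int(np.argmax(self.q + self.amplitude * np.sin(phases)))

        def update(self, channel, ack):
            # Reinforce on ACK, penalize otherwise.
            self.q[channel] += 1.0 if ack else -self.omega

    rng = np.random.default_rng(0)
    success_prob = np.array([0.3, 0.8, 0.5])    # unknown channel qualities
    selector = TowChannelSelector(n_channels=3)
    for _ in range(200):
        ch = selector.select()
        selector.update(ch, rng.random() < success_prob[ch])
    print(selector.q.argmax())   # usually 1, the best channel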

18 pages, 1803 KiB  
Article
Machine Learning-Based Dimension Optimization for Two-Stage Precoder in Massive MIMO Systems with Limited Feedback
by Jinho Kang, Jung Hoon Lee and Wan Choi
Appl. Sci. 2019, 9(14), 2894; https://doi.org/10.3390/app9142894 - 19 Jul 2019
Cited by 8 | Viewed by 3423
Abstract
A two-stage precoder is widely considered in frequency division duplex massive multiple-input multiple-output (MIMO) systems to resolve the channel feedback overhead problem. In massive MIMO systems, the users on a network can be divided into several groups with similar spatial antenna correlations. With a two-stage precoder, the outer precoder reduces the channel dimensions while mitigating inter-group interference at the first stage, and the inner precoder eliminates intra-group interference in the reduced dimensions at the second stage. The dimension of the effective channel after outer precoding is important, as it balances the inter-group interference, the intra-group interference, and the performance loss from quantized channel feedback. In this paper, we propose a machine learning framework to find the optimal dimensions reduced by the outer precoder that maximize the average sum rate; the original problem is NP-hard. Our framework uses a deep neural network whose inputs are channel statistics and whose outputs are the effective channel dimensions after outer precoding. Numerical results show that the proposed machine learning-based dimension optimization achieves an average sum rate comparable to the optimum found by brute-force search, which is not feasible in practice.
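A minimal sketch of such a mapping follows; the layer sizes, the choice of input statistics, and the candidate-dimension encoding are assumptions, and one plausible (assumed) training setup would be supervised learning against brute-force-optimal dimensions computed offline. The network scores each candidate effective dimension from per-group channel statistics:

    import torch
    import torch.nn as nn

    class DimensionNet(nn.Module):
        # Maps channel statistics to a score per candidate effective dimension.
        def __init__(self, n_stats=16, n_candidates=8):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(n_stats, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, n_candidates),
            )

        def forward(self, x):
            return self.net(x)

    model = DimensionNet()
    # e.g., eigenvalues of each group's transmit covariance as input statistics
    stats = torch.randn(1, 16)
    best = model(stats).argmax(dim=1).item() + 1   # chosen effective dimension
    print(best)
    # Training would fit the scores to sum-rate-optimal dimensions, e.g. via
    # cross-entropy against labels obtained by offline brute-force search.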

16 pages, 2479 KiB  
Article
Payload-Based Traffic Classification Using Multi-Layer LSTM in Software Defined Networks
by Hyun-Kyo Lim, Ju-Bong Kim, Kwihoon Kim, Yong-Geun Hong and Youn-Hee Han
Appl. Sci. 2019, 9(12), 2550; https://doi.org/10.3390/app9122550 - 21 Jun 2019
Cited by 33 | Viewed by 7415
Abstract
Recently, with the advent of various Internet of Things (IoT) applications, a massive amount of network traffic is being generated, and a network operator must provide a different quality of service according to the service delivered by each application. Toward this end, many studies have investigated how to classify various types of application network traffic accurately. In particular, since many applications in the IoT environment use temporary or dynamic IP addresses and port numbers, payload-based classification is more suitable than classification that relies on packet header information. Furthermore, to respond automatically to various applications, traffic should be classified using deep learning without network operator intervention. In this study, we propose a traffic classification scheme using deep learning models in software-defined networks. We generate flow-based payload datasets through our own network traffic pre-processing and train two deep learning models: (1) a multi-layer long short-term memory (LSTM) model and (2) a combination of a convolutional neural network and a single-layer LSTM model. We also execute a model tuning procedure to find the optimal hyper-parameters of the two models. Lastly, we analyze the classification performance of both models on the basis of the F1-score and show the superiority of the multi-layer LSTM model for network packet classification.
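For intuition, a minimal multi-layer LSTM classifier over raw payload bytes might look like the sketch below; the layer sizes, sequence length, and byte-embedding front end are assumptions, not the paper's exact architecture:

    import torch
    import torch.nn as nn

    class PayloadLSTM(nn.Module):
        def __init__(self, n_classes, emb=64, hidden=128, layers=2):
            super().__init__()
            self.embed = nn.Embedding(256, emb)   # one vector per byte value
            self.lstm = nn.LSTM(emb, hidden, num_layers=layers,
                                batch_first=True)
            self.fc = nn.Linear(hidden, n_classes)

        def forward(self, x):           # x: (batch, seq_len) byte values 0..255
            out, _ = self.lstm(self.embed(x))
            return self.fc(out[:, -1])  # last time step -> class logits

    model = PayloadLSTM(n_classes=8)
    payload = torch.randint(0, 256, (4, 300))   # 4 flows, 300 payload bytes each
    print(model(payload).shape)                 # torch.Size([4, 8])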

17 pages, 688 KiB  
Article
Reinforcement Learning Based Resource Management for Network Slicing
by Yohan Kim, Sunyong Kim and Hyuk Lim
Appl. Sci. 2019, 9(11), 2361; https://doi.org/10.3390/app9112361 - 9 Jun 2019
Cited by 38 | Viewed by 6250
Abstract
Network slicing, which creates multiple virtual networks called network slices, is a promising technology for enabling networking resource sharing among multiple tenants in fifth-generation (5G) networks. By offering a network slice to slice tenants, network slicing supports parallel services while meeting service level agreements (SLAs). In legacy networks, every tenant pays a fixed and roughly estimated monthly or annual fee for shared resources according to a contract signed with a provider. However, such a fixed resource allocation mechanism may result in low resource utilization or violation of user quality of service (QoS) due to fluctuations in network demand. To address this issue, we introduce a resource management system for network slicing and propose a dynamic resource adjustment algorithm based on a reinforcement learning approach from each tenant's point of view. First, resource management for network slicing is modeled as a Markov Decision Process (MDP) with a state space, an action space, and a reward function. Then, we propose a Q-learning-based dynamic resource adjustment algorithm that aims to maximize tenant profit while ensuring the QoS requirements of end-users. Numerical simulation results demonstrate that the proposed algorithm can significantly increase tenant profit compared to existing fixed resource allocation methods while satisfying end-user QoS requirements.
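The core update is standard tabular Q-learning; the toy environment below (demand levels as states, allocation levels as actions, and a profit-style reward) is a hypothetical stand-in for the paper's MDP, meant only to show the mechanics:

    import numpy as np

    rng = np.random.default_rng(1)
    n_levels = 5                         # discretized demand = state,
    Q = np.zeros((n_levels, n_levels))   # allocated resources = action
    alpha, gamma, eps = 0.1, 0.9, 0.1

    def reward(demand, alloc):
        # Hypothetical profit: served demand earns revenue, resources cost
        # money, and unmet demand incurs a QoS penalty.
        return min(demand, alloc) - 0.3 * alloc - 1.0 * max(demand - alloc, 0)

    state = 0
    for _ in range(20000):
        if rng.random() < eps:                      # epsilon-greedy exploration
            action = int(rng.integers(n_levels))
        else:
            action = int(Q[state].argmax())
        r = reward(state, action)
        nxt = int(rng.integers(n_levels))           # demand fluctuates
        Q[state, action] += alpha * (r + gamma * Q[nxt].max() - Q[state, action])
        state = nxt

    print(Q.argmax(axis=1))   # typically [0 1 2 3 4]: allocate to match demand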

17 pages, 4489 KiB  
Article
A Novel Neural Network-Based Method for Decoding and Detecting of the DS8-PSK Scheme in an OCC System
by Tung Lam Pham, Huy Nguyen, Trang Nguyen and Yeong Min Jang
Appl. Sci. 2019, 9(11), 2242; https://doi.org/10.3390/app9112242 - 30 May 2019
Cited by 13 | Viewed by 4296
Abstract
This paper proposes a novel method of training and applying a neural network to act as an adaptive decoder for a modulation scheme used in optical camera communication (OCC). We present a brief discussion of trending artificial intelligence applications, of contemporary ways of applying them in wireless communication fields such as visible light communication (VLC), optical wireless communication (OWC), and OCC, and of their potential contribution to the development of this research area. Furthermore, we propose an OCC vehicular system architecture with artificial intelligence (AI) functionalities, where dimmable spatial 8-phase shift keying (DS8-PSK) is employed as one of two modulation schemes forming a hybrid waveform. We describe in detail the simulation of the blurring process on a transmitter image, as well as our proposed method of using a neural network as a decoder for DS8-PSK. Finally, experimental results are given to prove the effectiveness and efficiency of the proposed method under the investigated channel conditions.
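As a schematic of the decoding idea only — the input features, blur model, and layer sizes below are invented for illustration and the paper's actual pipeline differs — a small classifier maps blurred per-LED intensity samples to one of the eight DS8-PSK symbols:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Hypothetical decoder: blurred intensity samples -> 8-way symbol logits.
    decoder = nn.Sequential(
        nn.Linear(16, 64), nn.ReLU(),   # 16 intensity samples per symbol slot
        nn.Linear(64, 8),               # one logit per DS8-PSK symbol
    )

    # Toy input: stand-in transmit patterns blurred by a short averaging kernel,
    # mimicking camera defocus on the transmitter image.
    clean = torch.eye(16)[:8]                          # 8 patterns, 16 samples
    kernel = torch.tensor([[0.25, 0.5, 0.25]]).view(1, 1, 3)
    blurred = F.conv1d(clean.unsqueeze(1), kernel, padding=1).squeeze(1)

    logits = decoder(blurred)      # forward pass on the 8 blurred patterns
    print(logits.argmax(dim=1))    # predicted symbol indices (decoder untrained)
    # Training would fit the decoder on (blurred pattern, symbol index) pairs
    # via cross-entropy, making it adaptive to the observed blur.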