AI in Mobile Networks

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (31 August 2020) | Viewed by 8502

Special Issue Editors


Dr. Paul Patras
Guest Editor
School of Informatics, The University of Edinburgh, Edinburgh, UK
Interests: mobile intelligence; IoT security and privacy

Dr. Marco Fiore
Guest Editor
CNR-IEIIT, Corso Duca degli Abruzzi 24, 10129 Torino, Italy
Interests: user mobility characterization; mobile traffic data mining and analysis; privacy in mobile traffic datasets

Special Issue Information

Dear Colleagues,

Neural networks have achieved remarkable results in image classification, speech recognition, and natural language processing. Their application to mobile networks is, however, in its infancy. Given the exploding data consumption, which can be harnessed for neural model training, exploiting the powerful feature-extraction abilities of deep learning to solve complex mobile networking problems is becoming increasingly appealing.

This Special Issue on “AI in Mobile Networks” aims to provide a snapshot of the most recent advances at the intersection of network engineering and data science, presenting novel research that brings artificial intelligence (AI) and machine learning (ML) into the mobile networking domain, and which will enable the development of future intelligent mobile networks.

Topics of interest include but are not limited to the following:

  • AI and ML for mobile traffic analysis and prediction;
  • AI and ML for mobility analysis and localization;
  • AI and ML for network management and orchestration;
  • AI and ML for radio resource scheduling;
  • ML for security and privacy in mobile networks;
  • Advances in AI for signal processing;
  • Novel mobile networking applications enabled by deep learning;
  • Systems and algorithms for ML tailored to mobile devices.

Dr. Paul Patras
Dr. Marco Fiore
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (3 papers)


Research

16 pages, 558 KiB  
Article
World-Models for Bitrate Streaming
by Harrison Brown, Kai Fricke and Eiko Yoneki
Appl. Sci. 2020, 10(19), 6685; https://doi.org/10.3390/app10196685 - 24 Sep 2020
Cited by 1 | Viewed by 1601
Abstract
Adaptive bitrate (ABR) algorithms optimize the quality of streaming experiences for users in client-side video players, especially in unreliable or slow mobile networks. Several rule-based heuristic algorithms can achieve stable performance, but they sometimes fail to adapt properly to changing network conditions: fluctuating bandwidth may cause an algorithm to default to behavior that creates a negative experience for the user. ABR algorithms can instead be generated with reinforcement learning, a decision-making paradigm in which an agent learns to make optimal choices through interactions with an environment. Training reinforcement learning algorithms for bitrate streaming requires a simulator in which an agent can experience interactions quickly; training in the real environment is infeasible due to its long step times. This project explores using supervised learning to construct a world-model, i.e., a learned simulator, from recorded interactions. A reinforcement learning agent trained inside the learned model, rather than a hand-built simulator, can outperform rule-based heuristics. Furthermore, agents trained inside the learned world-model can outperform model-free agents in low-sample regimes. This work highlights the potential of world-models to quickly learn simulators and to be used for generating optimal policies.
(This article belongs to the Special Issue AI in Mobile Networks)
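
To make the world-model idea concrete, here is a minimal sketch: fit a dynamics model on recorded (state, action, next-state) transitions by supervised learning, then roll a policy out inside that model instead of the real streaming environment. Everything below is an illustrative assumption rather than the authors' implementation: the two-feature state (throughput, buffer level), the toy dynamics standing in for real traces, the QoE-style reward, and the world_model/rollout names.

```python
# Sketch: learn a simulator from logged transitions, then evaluate a
# policy inside it. All features, dynamics, and rewards are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical recorded interactions: state = (throughput, buffer level),
# action = chosen bitrate index, next state observed from the real player.
n, state_dim, n_actions = 5000, 2, 4
states = rng.random((n, state_dim))
actions = rng.integers(0, n_actions, n)
# Toy ground-truth dynamics standing in for the real network/player.
next_states = (0.8 * states + 0.05 * actions[:, None]
               + 0.01 * rng.standard_normal((n, state_dim)))

# Supervised learning step: predict the next state from (state, action).
X = np.hstack([states, actions[:, None]])
world_model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                           random_state=0)
world_model.fit(X, next_states)

def rollout(policy, s0, horizon=50):
    """Roll a policy out inside the learned model; no real network needed."""
    s, total = s0, 0.0
    for _ in range(horizon):
        a = policy(s)
        s = world_model.predict(np.hstack([s, [a]]).reshape(1, -1))[0]
        total += s[1] - 0.1 * abs(a - 2)  # toy QoE-style reward on buffer
    return total

greedy = lambda s: int(s[0] * (n_actions - 1))  # throughput-matching heuristic
print("return in learned simulator:", rollout(greedy, states[0]))
```

The design point this illustrates is speed: once the dynamics model is fitted, a rollout costs only model predictions, so an agent can gather experience far faster than by interacting with a real network.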

21 pages, 4215 KiB  
Article
Comparing Deep Learning and Statistical Methods in Forecasting Crowd Distribution from Aggregated Mobile Phone Data
by Alket Cecaj, Marco Lippi, Marco Mamei and Franco Zambonelli
Appl. Sci. 2020, 10(18), 6580; https://doi.org/10.3390/app10186580 - 21 Sep 2020
Cited by 25 | Viewed by 3136
Abstract
Accurately forecasting how crowds of people are distributed in urban areas during daily activities is of key importance for the smart city vision and related applications. In this work, we forecast the crowd density and distribution in an urban area by analyzing an aggregated mobile phone dataset. By comparing the forecasting performance of statistical and deep learning methods on the aggregated mobile data, we show that each class of methods has its advantages and disadvantages depending on the forecasting scenario. For our time-series forecasting problem, deep learning methods are preferable in terms of simplicity and immediacy of use, since they do not require time-consuming model selection for each cell. Deep learning approaches are also appropriate when the aim is to reduce the maximum forecasting error. Statistical methods, in turn, prove superior at providing more precise forecasts, but they require domain knowledge of the data and computationally expensive techniques to select the best parameters.
(This article belongs to the Special Issue AI in Mobile Networks)
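
The trade-off described above, per-cell model selection for statistical methods versus an immediately reusable learned model, can be sketched on a single synthetic "cell" series. The hourly seasonal shape, the ARIMA order, and the 24-lag window below are assumptions for illustration, not the paper's configuration.

```python
# Sketch: classical statistical forecaster (ARIMA) vs. a simple learned
# forecaster on one synthetic cell's hourly crowd-density series.
import numpy as np
from sklearn.neural_network import MLPRegressor
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
t = np.arange(24 * 14)                                 # two weeks, hourly
series = 100 + 30 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 5, t.size)
train, test = series[:-24], series[-24:]               # hold out the last day

# Statistical baseline: ARIMA with a manually chosen order. Picking an
# order per cell is the costly model-selection step the paper points to.
arima_fc = ARIMA(train, order=(2, 0, 1)).fit().forecast(24)

# Learned model: lagged window -> next value, usable across cells as-is.
lags = 24
X = np.array([train[i:i + lags] for i in range(len(train) - lags)])
y = train[lags:]
mlp = MLPRegressor(hidden_layer_sizes=(64,), max_iter=1000,
                   random_state=0).fit(X, y)

window, mlp_fc = list(train[-lags:]), []
for _ in range(24):                                    # recursive 24-step forecast
    pred = mlp.predict(np.array(window).reshape(1, -1))[0]
    mlp_fc.append(pred)
    window = window[1:] + [pred]

mae = lambda f: np.mean(np.abs(np.asarray(f) - test))
print(f"ARIMA MAE: {mae(arima_fc):.2f}  MLP MAE: {mae(mlp_fc):.2f}")
```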

20 pages, 489 KiB  
Article
Can We Exploit Machine Learning to Predict Congestion over mmWave 5G Channels?
by Luis Diez, Alfonso Fernández, Muhammad Khan, Yasir Zaki and Ramón Agüero
Appl. Sci. 2020, 10(18), 6164; https://doi.org/10.3390/app10186164 - 04 Sep 2020
Cited by 4 | Viewed by 3221
Abstract
It is well known that transport protocol performance is severely hindered by wireless channel impairments. We study the applicability of Machine Learning (ML) techniques to predict the congestion status of 5G access networks, in particular mmWave links. We use realistic traces generated with the 3GPP channel models, unaffected by the behavior of legacy congestion-control solutions. We start by identifying the metrics that might be exploited from the transport layer to learn the congestion state: delay and inter-arrival time. We formally study their correlation with the perceived congestion, which we ascertain based on buffer length variation. Then, we conduct an extensive analysis of various unsupervised and supervised solutions, using the latter as a benchmark. The results show that unsupervised ML solutions can detect a large percentage of congestion situations, and they could thus bring interesting possibilities when designing congestion-control solutions for next-generation transport protocols. Full article
(This article belongs to the Special Issue AI in Mobile Networks)
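
As a hedged illustration of the unsupervised route, the sketch below clusters the two transport-layer metrics the paper identifies, delay and inter-arrival time, and labels the higher-delay cluster as congested, scoring it against a buffer-style ground truth. The synthetic traces and distribution parameters stand in for the paper's 3GPP-based traces and are purely illustrative.

```python
# Sketch: unsupervised congestion detection from two transport-layer
# metrics. Traces are synthetic stand-ins, not the paper's 3GPP traces.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Toy traces: congestion inflates delay (queue build-up) and spreads
# inter-arrival times; ground truth here mimics buffer-length variation.
n = 2000
congested = rng.random(n) < 0.3
delay = np.where(congested, rng.normal(80, 15, n), rng.normal(20, 5, n))  # ms
iat = np.where(congested, rng.normal(8, 3, n), rng.normal(2, 0.5, n))     # ms

X = np.column_stack([delay, iat])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Map clusters to states: the cluster with higher mean delay is congestion.
congested_cluster = int(np.argmax([delay[labels == k].mean() for k in (0, 1)]))
pred = labels == congested_cluster

detected = np.mean(pred[congested])        # share of congestion detected
false_alarm = np.mean(pred[~congested])    # non-congested flagged anyway
print(f"detected: {detected:.1%}  false alarms: {false_alarm:.1%}")
```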
