Advances in Wireless Communications Using Machine Learning and Deep Learning Techniques

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Electrical, Electronics and Communications Engineering".

Deadline for manuscript submissions: 30 August 2024 | Viewed by 7833

Special Issue Editor


Prof. Dr. Gordana Jovanovic Dolecek
Guest Editor
Institute INAOE, Puebla 72840, Mexico
Interests: wireless communications; machine learning; deep learning; artificial intelligence

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) is a rapidly growing technology capable of performing complex tasks that once required human judgment. Machine learning (ML) and deep learning (DL) are its two main branches. ML trains computers on experience and historical data so that they can perform tasks such as prediction and estimation automatically. Because of this ability to learn from past data, ML has become a powerful problem-solving tool in fields such as engineering, medicine, biology, astronomy, and commercial applications. DL is a subfield of ML with its own capabilities and approaches, inspired by the cells of the human brain, called neurons; it is therefore also known as deep neural learning, and its models as deep neural networks.

ML and DL are entering wireless communications curricula because of the need to address the growing challenges in wireless communications and networks. The development of ML and DL techniques for wireless communication has expanded rapidly as new demands and applications call for more intelligent operation and processing. As a result, ML and DL have become among the most important trends in wireless communications research and applications. This Special Issue aims to bring together research results and industrial applications of ML and DL for wireless communications and networks. We invite original, previously unpublished works that are not currently under review elsewhere, on topics of interest including (but not limited to) the following:

  • Signal detection and channel modeling;
  • Sparse signal recovery;
  • Channel equalization;
  • Channel prediction;
  • Channel coding and decoding;
  • Modulation;
  • Energy efficiency optimization;
  • Antenna design and dynamic configuration;
  • Optimization, physical-layer, and cross-layer processing;
  • Resource allocation in cognitive radio networks;
  • Spectrum access and sharing;
  • Software-defined networking;
  • Software-defined flexible radio;
  • Protocol design;
  • Network optimization, resource management, and security;
  • Traffic and mobility prediction;
  • Massive MIMO systems;
  • Positioning and navigation systems;
  • Emerging applications, such as IoT, smart cities, and vehicular networks;
  • Sensor networks.

Prof. Dr. Gordana Jovanovic Dolecek
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (4 papers)


Research


13 pages, 3210 KiB  
Article
Channel Modeling Based on Transformer Symbolic Regression for Inter-Satellite Terahertz Communication
by Yuanzhi He, Biao Sheng and Zhiqiang Li
Appl. Sci. 2024, 14(7), 2929; https://doi.org/10.3390/app14072929 - 30 Mar 2024
Viewed by 543
Abstract
Channel modeling is crucial for the design of inter-satellite terahertz communication systems. The conventional approach, manually constructing a mathematical channel model, is labor-intensive, while using a neural network directly as a channel model lacks interpretability. This paper introduces a channel modeling approach based on symbolic regression. It is the first to use transformer neural networks as the implementation tool of symbolic regression, generating a mathematical channel model directly from channel data. This saves manpower and avoids the interpretability issue of using a neural network as a channel model. The feasibility of the proposed method is verified by generating a free-space path loss model from simulation data in the terahertz frequency band.
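As an illustration of the kind of closed-form expression the paper's symbolic regression aims to recover, the standard free-space path loss formula can be evaluated directly. This sketch is the generic textbook formula, not the authors' code, and the link distance and frequency chosen are illustrative.

```python
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: FSPL = 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0  # speed of light, m/s
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / c)

# Illustrative inter-satellite link: 1000 km at 300 GHz (lower terahertz band)
loss = free_space_path_loss_db(1_000_000.0, 300e9)
print(f"{loss:.1f} dB")  # roughly 202 dB
```

A symbolic-regression model that reproduces this curve from simulated channel data would, unlike a black-box network, expose the logarithmic dependence on distance and frequency explicitly.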

17 pages, 4867 KiB  
Article
Satellite Image Categorization Using Scalable Deep Learning
by Samabia Tehsin, Sumaira Kausar, Amina Jameel, Mamoona Humayun and Deemah Khalaf Almofarreh
Appl. Sci. 2023, 13(8), 5108; https://doi.org/10.3390/app13085108 - 19 Apr 2023
Cited by 4 | Viewed by 2936
Abstract
Detecting and classifying objects in satellite images is crucial for many applications, from marine monitoring and land planning to ecology and warfare. Satellite images, rich in spatial and temporal information, are exploited in a variety of ways to solve real-world remote sensing problems. Satellite image classification nevertheless faces several challenges, including the availability, quality, quantity, and distribution of data, all of which complicate analysis. A convolutional neural network architecture with a scaling method is proposed for the classification of satellite images. The scaling method evenly scales network depth, width, and input resolution using a compound coefficient. The approach can serve as a preliminary task in urban planning, satellite surveillance, and monitoring, and can also support geo-information and maritime monitoring systems. The proposed methodology is an end-to-end, scalable satellite image interpretation pipeline that uses spatial information to categorize images into four categories. It gives encouraging and promising results on a challenging dataset with high inter-class similarity and intra-class variation, achieving 99.64% accuracy on the RSI-CB256 dataset.
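The compound-coefficient scaling the abstract describes resembles the widely used rule in which depth, width, and resolution grow together under a single exponent. The constants below (alpha, beta, gamma) are illustrative assumptions, not values from the paper.

```python
def compound_scale(phi: int, alpha: float = 1.2, beta: float = 1.1,
                   gamma: float = 1.15):
    """Scale depth, width, and resolution multipliers with one coefficient phi.

    Raising phi grows all three dimensions in a fixed ratio instead of
    hand-tuning each one separately.
    """
    depth = alpha ** phi       # multiplier on the number of layers
    width = beta ** phi        # multiplier on channels per layer
    resolution = gamma ** phi  # multiplier on input image side length
    return depth, width, resolution

d, w, r = compound_scale(phi=2)
print(f"depth x{d:.2f}, width x{w:.2f}, resolution x{r:.4f}")
```

The appeal of a single coefficient is that one knob trades model capacity against compute, which matters when the same architecture must scale across satellite image resolutions.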

18 pages, 4546 KiB  
Article
Sep-RefineNet: A Deinterleaving Method for Radar Signals Based on Semantic Segmentation
by Yongjiang Mao, Wenjuan Ren, Xipeng Li, Zhanpeng Yang and Wei Cao
Appl. Sci. 2023, 13(4), 2726; https://doi.org/10.3390/app13042726 - 20 Feb 2023
Cited by 1 | Viewed by 2380
Abstract
With advances in signal processing technology and the emergence of new radar systems, the space electromagnetic environment has become increasingly complex, placing higher demands on radar signal deinterleaving methods. Traditional deinterleaving algorithms rely heavily on manually tuned thresholds and have poor robustness. To address this problem, we designed an intelligent radar signal deinterleaving algorithm, named Sep-RefineNet, built from a frequency characteristic matrix encoding and a semantic segmentation network. The frequency characteristic matrix captures the semantic features of the different pulse streams in a radar signal. The Sep-RefineNet network performs pixel-level segmentation of this matrix, and position decoding and verification then recover each pulse's position in the original pulse stream to complete deinterleaving. The proposed method avoids the threshold judgment and pulse-sequence search of traditional methods. Experimental results show that the algorithm improves deinterleaving accuracy and remains robust to aliased and missing pulses under noise.
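To make the pipeline concrete, a frequency characteristic matrix can be read as a 2D time-frequency occupancy grid over the interleaved pulse stream; the binning below is a generic sketch of that idea under assumed units, not the authors' exact encoding, and the segmentation network that would label each cell is omitted.

```python
def frequency_characteristic_matrix(pulses, t_bins=8, f_bins=8,
                                    t_max=1.0, f_max=100.0):
    """Bin (time_of_arrival, carrier_freq) pulse pairs into a 2D count matrix.

    Pixel-level segmentation of such a matrix would assign each occupied
    cell to one emitter; decoding those labels back to pulse positions
    is the deinterleaving step.
    """
    m = [[0] * f_bins for _ in range(t_bins)]
    for toa, freq in pulses:
        ti = min(int(toa / t_max * t_bins), t_bins - 1)
        fi = min(int(freq / f_max * f_bins), f_bins - 1)
        m[ti][fi] += 1
    return m

# Two interleaved emitters: one near 20 MHz, one near 80 MHz (illustrative units)
pulses = [(0.1, 20.0), (0.3, 21.0), (0.5, 19.5), (0.2, 80.0), (0.6, 79.0)]
m = frequency_characteristic_matrix(pulses)
print(sum(sum(row) for row in m))  # total pulse count
```

In this toy grid the two emitters occupy separate frequency columns, which is exactly the kind of spatial structure a segmentation network can exploit without any hand-set threshold.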

Review


31 pages, 3954 KiB  
Review
A Review on Congestion Mitigation Techniques in Ultra-Dense Wireless Sensor Networks: State-of-the-Art Future Emerging Artificial Intelligence-Based Solutions
by Abdullah Umar, Zubair Khalid, Mohammed Ali, Mohammed Abazeed, Ali Alqahtani, Rahat Ullah and Hashim Safdar
Appl. Sci. 2023, 13(22), 12384; https://doi.org/10.3390/app132212384 - 16 Nov 2023
Cited by 1 | Viewed by 1351
Abstract
The Internet of Things (IoT) and wireless sensor networks (WSNs) have evolved rapidly due to technological breakthroughs. WSNs generate heavy traffic as the number of sensor nodes grows, and congestion is one of several problems caused by the resulting volume of data. When wireless network resources are limited and IoT devices demand ever more of them, congestion arises in ultra-dense WSN-based IoT networks. Its effects include reduced throughput, network capacity, and energy efficiency within WSNs, which in turn lead to network outages, underutilized network resources, increased operating costs, and significantly degraded quality of service (QoS). Addressing congestion in WSN-based IoT networks is therefore critical. Researchers have developed a number of approaches to this problem, among which new solutions based on artificial intelligence (AI) stand out. This review examines how emerging AI-based algorithms contribute to congestion mitigation in WSN-based IoT networks and surveys the congestion mitigation strategies that have helped reduce congestion. It also highlights the limitations of AI-based solutions, including where and why they are used in WSNs, and provides a comparative study of the current literature, which makes this study novel. The review concludes with a discussion of its significance and potential future research topics. Collectively, these solutions contribute to network optimization, throughput enhancement, quality-of-service improvement, network capacity expansion, and overall WSN efficiency.
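As a minimal illustration of the congestion-detection step that many surveyed schemes build on, a node can classify its buffer occupancy against thresholds; the thresholds and coarse levels below are illustrative assumptions, not drawn from any specific algorithm in the review. AI-based variants typically learn such thresholds, or the whole control policy, from traffic data instead of fixing them by hand.

```python
def congestion_level(queue_len: int, capacity: int) -> str:
    """Classify a sensor node's buffer occupancy into coarse congestion levels.

    Crossing a level would typically trigger rate adjustment or rerouting
    in a congestion-control scheme.
    """
    ratio = queue_len / capacity
    if ratio < 0.5:
        return "low"
    if ratio < 0.8:
        return "moderate"
    return "congested"

print(congestion_level(45, 50))  # prints "congested"
```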
