Topic Editors

Dr. Wencheng Lai
Department of Electronic Engineering, National Yunlin University of Science and Technology, Douliu, Taiwan
Prof. Dr. Han-Chieh Chao
Department of Electrical Engineering, National Dong Hwa University, Hualien 974, Taiwan
Prof. Dr. Adam W. Skorek
Department of Electrical and Computer Engineering, University of Québec at Trois-Rivières, QC G8Z 4M3, Canada
Dr. Małgorzata Kujawska
Department of Toxicology, Poznan University of Medical Sciences, Dojazd 30, 60-631 Poznań, Poland
Prof. Dr. Lidia Dobrescu
Faculty of Electronics, Telecommunications and Information Technology, Universitatea POLITEHNICA din București, 060042 București, Romania
Prof. Dr. Sheng-Lyang Jang
Department of Electronic Engineering, National Taiwan University of Science and Technology, Taipei, Taiwan
Prof. Dr. Yi Wu
College of Photonic and Electronic Engineering, Fujian Normal University, Fujian, China
Prof. Dr. Vijayakumar Varadarajan
School of Computer Science and Engineering, The University of New South Wales, Sydney, NSW 2052, Australia
Dr. Hao Wang
Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norwegian University of Science and Technology, 7034 Trondheim, Norway
Prof. Dr. Rashmi Bhardwaj
Non Linear Dynamics Research Lab, University School of Basic & Applied Sciences, Guru Gobind Singh Indraprastha University, New Delhi, India

Wireless Communications and Edge Computing in 6G

Abstract submission deadline
closed (30 September 2022)
Manuscript submission deadline
closed (30 December 2022)
Viewed by
51256

Topic Information

Dear Colleagues,

Soon, 6G wireless communications will be the new standard, providing not only huge bit rates (terabits per second) and lower latency (less than 1 ms), but also enabling a “hyper-connected” paradigm that connects both users and things. Artificial Intelligence (AI) will play a major role within 6G, consuming ever more computation and communication resources, so their optimization is a must. New applications keep emerging, such as pervasive edge intelligence, ultra-massive machine-type communications, extremely reliable ultra-low-latency communications, holographic telepresence, eHealth and Parkinson's disease applications, pervasive connectivity in smart environments, UAVs, autonomous driving, the Internet of Things (IoT), massive robotics, Industry 4.0, massive unmanned mobility in three dimensions, and augmented reality (AR) and virtual reality (VR). Many future data-intensive applications and services, such as pervasive edge intelligence, holographic rendering, high-precision manufacturing, ultra-massive machine-type communications, and MR-based gaming, are expected to demand higher data rates (1 Tbps and beyond) and extremely low delay (0.1 ms).

The objective of this Topic is to define the framework of wireless communications and edge computing in 6G, its services, and breakthrough technologies. We are soliciting original contributions that have not been previously published and are not currently under consideration by any other journals. Particular emphasis is placed on radical new concepts and ideas. The topics of interest include, but are not limited to, the following:

Dr. Wencheng Lai
Prof. Dr. Han-Chieh Chao
Prof. Dr. Adam W. Skorek
Dr. Małgorzata Kujawska
Prof. Dr. Lidia Dobrescu
Prof. Dr. Sheng-Lyang Jang
Prof. Dr. Yi Wu
Prof. Dr. Vijayakumar Varadarajan
Dr. Hao Wang
Prof. Dr. Rashmi Bhardwaj
Topic Editors

Keywords

  • edge computing applications for 6G wireless communications
  • 6G antenna design for massive MIMO
  • 6G front-end transmitter and receiver circuit design
  • THz communications for 6G applications
  • Artificial Intelligence of Things (AIoT) for 6G
  • mobile communications and network architectures for a 6G system
  • edge and cloud computing for robotics, UAVs, AR/VR, and automation
  • AI for edge computing of electric vehicles

Participating Journals

Journal Name    | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
Automation      | –             | 2.9       | 2020          | 20.6 days               | CHF 1000
Future Internet | 2.8           | 7.1       | 2009          | 13.1 days               | CHF 1600
Robotics        | 2.9           | 6.7       | 2012          | 17.7 days               | CHF 1800
Sensors         | 3.4           | 7.3       | 2001          | 16.8 days               | CHF 2600
Smart Cities    | 7.0           | 11.2      | 2018          | 25.8 days               | CHF 2000

Preprints.org is a multidisciplinary platform providing a preprint service dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to enjoy the benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your idea with a time-stamped preprint article;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (19 papers)

26 pages, 1368 KiB  
Article
AoI-Aware Optimization of Service Caching-Assisted Offloading and Resource Allocation in Edge Cellular Networks
by Jialiang Feng and Jie Gong
Sensors 2023, 23(6), 3306; https://doi.org/10.3390/s23063306 - 21 Mar 2023
Cited by 1 | Viewed by 1922
Abstract
The rapid development of the Internet of Things (IoT) has led to computational offloading at the edge; this is a promising paradigm for achieving intelligence everywhere. As offloading can lead to more traffic in cellular networks, cache technology is used to alleviate the channel burden. For example, a deep neural network (DNN)-based inference task requires a computation service that involves running libraries and parameters. Thus, caching the service package is necessary for repeatedly running DNN-based inference tasks. On the other hand, as the DNN parameters are usually trained in a distributed manner, IoT devices need to fetch up-to-date parameters for inference task execution. In this work, we consider the joint optimization of computation offloading, service caching, and the AoI metric. We formulate a problem to minimize the weighted sum of the average completion delay, energy consumption, and allocated bandwidth. Then, we propose the AoI-aware service caching-assisted offloading framework (ASCO) to solve it, which consists of the method of Lagrange multipliers with the KKT condition-based offloading module (LMKO), the Lyapunov optimization-based learning and update control module (LLUC), and the Kuhn–Munkres (KM) algorithm-based channel-division fetching module (KCDF). The simulation results demonstrate that our ASCO framework achieves superior performance in regard to time overhead, energy consumption, and allocated bandwidth. It is verified that our ASCO framework benefits not only the individual task but also the global bandwidth allocation. Full article
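The KCDF module's channel-division fetching is, at its core, an assignment problem: match devices to sub-channels so the total fetching cost is minimal, which the Kuhn–Munkres algorithm solves in polynomial time. A hedged sketch follows; the cost matrix is invented, and brute-force enumeration stands in for KM only so the result is easy to verify at toy sizes:

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Exhaustively find the device-to-channel assignment with minimal
    total cost. Kuhn-Munkres solves the same problem in O(n^3); brute
    force is used here only because it is trivial to check by hand."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):  # perm[i] = channel given to device i
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_cost, best_perm = c, perm
    return best_perm, best_cost

# Toy cost matrix: cost[i][j] = delay if device i fetches parameters on channel j.
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
assignment, total = min_cost_assignment(cost)
```

For realistic sizes, `scipy.optimize.linear_sum_assignment` implements the same Hungarian-style solution efficiently.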
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)

16 pages, 4519 KiB  
Communication
A Software and Hardware Cooperation Method for Full Nyquist Rate Transmission Symbol Synchronization at E-Band Wireless Communication
by Fei Wang, Zhiqun Cheng, Hang Li and Dan Zhu
Sensors 2022, 22(22), 8924; https://doi.org/10.3390/s22228924 - 18 Nov 2022
Viewed by 1694
Abstract
Compared with the conventional pulse-shaping transmission system, the full Nyquist rate transmission system with large bandwidth is sensitive to the sampling phase. It has only one sample available per symbol period and is easily disturbed by the channel, so traditional symbol synchronization methods cannot be applied directly. Another challenge is that the resource utilization for sampling data processing needs to be minimized, because the high data throughput consumes excessive hardware resources. To solve these issues, we propose a symbol synchronization method based on the combination of software and hardware, which mainly includes two processes: obtaining the initial phase using a chirp signal and the MOE criterion before communication, and tracking the real-time phase using an online gradient table and frequency-domain analysis of known data during communication. Both processes are carried out with a phase-adjustable clock. Hardware verification shows that the sampling phase can be kept close to the optimal phase, thus ensuring the accuracy of the sampled data and improving the system's BER performance. Full article
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)

15 pages, 2296 KiB  
Article
An Edge-Based Architecture for Offloading Model Predictive Control for UAVs
by Achilleas Santi Seisa, Sumeet Gajanan Satpute, Björn Lindqvist and George Nikolakopoulos
Robotics 2022, 11(4), 80; https://doi.org/10.3390/robotics11040080 - 6 Aug 2022
Cited by 8 | Viewed by 3175
Abstract
Thanks to the development of 5G networks, edge computing has gained popularity in several areas of technology in which high computational power and low time delays are essential. These requirements are indispensable in the field of robotics, especially for real-time autonomous missions in mobile robots. Edge computing will provide the necessary resources in terms of computation and storage, while 5G technologies will provide minimal latency. High computational capacity is crucial in autonomous missions, especially when computationally demanding high-level algorithms are used. In the case of Unmanned Aerial Vehicles (UAVs), the onboard processors usually have limited computational capabilities; therefore, it is necessary to offload some of these tasks to the cloud or edge, depending on the time criticality of the application. Especially in the case of UAVs, the requirement to carry large payloads to cover the computational needs conflicts with other payload requirements, reducing the overall flying time and hindering autonomous operations from a regulatory perspective. In this article, we propose an edge-based architecture for autonomous UAV missions in which we offload the high-level control task of the UAV’s trajectory to the edge in order to take advantage of the available resources and push the Model Predictive Controller (MPC) to its limits. Additionally, we use Kubernetes to orchestrate our application, which runs on the edge, and present multiple experimental results that prove the efficacy of the proposed novel scheme. Full article
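The receding-horizon idea behind an MPC, optimize over a short horizon, apply only the first control, then re-solve from the new state, can be sketched on a one-dimensional double integrator. Everything below (dynamics, costs, the brute-force solver) is invented for illustration and is not the authors' controller:

```python
from itertools import product

def mpc_step(p, v, target, horizon=3, actions=(-1.0, 0.0, 1.0)):
    """Enumerate every action sequence over the horizon, pick the cheapest,
    and return only its first action (the receding-horizon principle)."""
    best_a, best_cost = 0.0, float("inf")
    for seq in product(actions, repeat=horizon):
        pp, vv, cost = p, v, 0.0
        for a in seq:
            vv += a          # double-integrator dynamics, dt = 1
            pp += vv
            cost += (pp - target) ** 2 + 0.1 * a ** 2
        if cost < best_cost:
            best_cost, best_a = cost, seq[0]
    return best_a

# Closed loop: re-plan at every step from the latest state.
p, v, target = 0.0, 0.0, 5.0
for _ in range(10):
    a = mpc_step(p, v, target)
    v += a
    p += v
```

Offloading the controller to the edge, as in the paper, moves exactly this per-step optimization off the UAV; only the state goes up and the first action comes back.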
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)

24 pages, 623 KiB  
Article
Enhanced Geographic Routing with One- and Two-Hop Movement Information in Opportunistic Ad Hoc Networks
by Mohd-Yaseen Mir, Hengbing Zhu and Chih-Lin Hu
Future Internet 2022, 14(7), 214; https://doi.org/10.3390/fi14070214 - 20 Jul 2022
Cited by 2 | Viewed by 2357
Abstract
Opportunistic ad hoc networks are characterized by intermittent and infrastructure-less connectivity among mobile nodes. Because of the lack of up-to-date network topology information and frequent link failures, geographic routing utilizes location information and adopts the store–carry–forward data delivery model to relay messages in a delay-tolerant manner. This paper proposes a message-forwarding policy based on movement patterns (MPMF). First, one- and two-hop location information in a geographic neighborhood is exploited to select relay nodes moving closer to a destination node. Message-forwarding decisions are made by referring to selected relay nodes’ weight values, obtained by calculating the contact frequency of each node with the destination node. Second, when relays in the vicinity of a message-carrying node are not qualified due to sparse node density and nodal motion status, the destination’s movement and the location information of a one-hop relay are jointly utilized to improve the message-forwarding decision. If the one-hop relay is not closer to the destination node or is moving away from it, its centrality value in the network is used instead. Based on both synthetic and real mobility scenarios, the simulation results show that the proposed policy performs competitively compared with some typical routing policies, such as Epidemic, PRoPHETv2, temporal closeness and centrality-based (TCCB), transient community-based (TC), and geographic-based spray-and-relay (GSaR) routing policies. Full article
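The first stage, choosing among neighbors that are moving closer to the destination the one with the highest destination-contact weight, can be sketched roughly as follows. The field names, the one-step motion prediction, and the weight formula are illustrative assumptions, not taken from the paper:

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def pick_relay(neighbors, dest):
    """Among one-hop neighbors predicted to move closer to the destination,
    pick the one with the highest contact-frequency weight; return None if
    no neighbor qualifies (the carrier then keeps the message)."""
    qualified = []
    for n in neighbors:
        now = dist(n["pos"], dest)
        nxt = dist((n["pos"][0] + n["vel"][0], n["pos"][1] + n["vel"][1]), dest)
        if nxt < now:                      # moving closer to the destination
            qualified.append(n)
    if not qualified:
        return None
    # weight = contacts with destination / total contacts (illustrative)
    return max(qualified, key=lambda n: n["dest_contacts"] / n["total_contacts"])

dest = (10.0, 0.0)
neighbors = [
    {"id": "a", "pos": (5.0, 0.0), "vel": (-1.0, 0.0), "dest_contacts": 9, "total_contacts": 10},
    {"id": "b", "pos": (4.0, 0.0), "vel": (1.0, 0.0), "dest_contacts": 3, "total_contacts": 10},
    {"id": "c", "pos": (6.0, 0.0), "vel": (1.0, 0.0), "dest_contacts": 5, "total_contacts": 10},
]
relay = pick_relay(neighbors, dest)
```

Node "a" has the best contact history but is moving away, so the geographic filter removes it before the weight comparison.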
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)

20 pages, 2005 KiB  
Article
A Long Short-Term Memory Network-Based Radio Resource Management for 5G Network
by Kavitha Rani Balmuri, Srinivas Konda, Wen-Cheng Lai, Parameshachari Bidare Divakarachari, Kavitha Malali Vishveshwarappa Gowda and Hemalatha Kivudujogappa Lingappa
Future Internet 2022, 14(6), 184; https://doi.org/10.3390/fi14060184 - 14 Jun 2022
Cited by 14 | Viewed by 3514
Abstract
Nowadays, the Long-Term Evolution-Advanced system is widely used to provide 5G communication due to its improved network capacity and lower delay during communication. The main issues in the 5G network are insufficient user resources and burst errors, which cause losses in data transmission. To overcome this, an effective Radio Resource Management (RRM) scheme needs to be developed for the 5G network. In this paper, a Long Short-Term Memory (LSTM) network is proposed for radio resource management in the 5G network. The proposed LSTM-RRM is used for assigning adequate power and bandwidth to the desired user equipment of the network. Moreover, Grid Search Optimization (GSO) is used for identifying the optimal hyperparameter values for the LSTM. In radio resource management, a request queue is used to avoid unwanted resource allocation in the network, and losses during transmission are minimized by using frequency interleaving and guard level insertion. The performance of the LSTM-RRM method has been analyzed in terms of throughput, outage percentage, dual connectivity, User Sum Rate (USR), Threshold Sum Rate (TSR), Outdoor Sum Rate (OSR), threshold guaranteed rate, indoor guaranteed rate, and outdoor guaranteed rate. The indoor guaranteed rate of LSTM-RRM for 1400 m of building distance improved by up to 75.38% compared to the existing QOC-RRM. Full article
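Grid Search Optimization amounts to exhaustively scoring every hyperparameter combination and keeping the best. A minimal sketch, where the mock validation loss stands in for actually training the LSTM and the grid values are invented:

```python
from itertools import product

def grid_search(grid, evaluate):
    """Try every combination of hyperparameter values and return the one
    with the lowest validation loss."""
    names = list(grid)
    best_params, best_loss = None, float("inf")
    for values in product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        loss = evaluate(params)
        if loss < best_loss:
            best_loss, best_params = loss, params
    return best_params, best_loss

# Mock loss: pretend 64 hidden units and lr = 0.01 generalize best.
def mock_validation_loss(p):
    return (p["hidden_units"] - 64) ** 2 / 1000 + abs(p["learning_rate"] - 0.01)

grid = {"hidden_units": [32, 64, 128], "learning_rate": [0.1, 0.01, 0.001]}
best, loss = grid_search(grid, mock_validation_loss)
```

In practice each `evaluate` call would train the LSTM once, so the grid's size directly multiplies training time; that is grid search's main cost.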
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)

15 pages, 1523 KiB  
Article
RAEN: Rate Adaptation for Effective Nodes in Backscatter Networks
by Jumin Zhao, Qi Liu, Dengao Li, Qiang Wang and Ruiqin Bai
Sensors 2022, 22(12), 4322; https://doi.org/10.3390/s22124322 - 7 Jun 2022
Cited by 1 | Viewed by 1738
Abstract
A backscatter network, as a key enabling technology for interconnecting plentiful IoT sensing devices, is applicable to a variety of interesting applications, e.g., wireless sensing and motion tracking. In these scenarios, the vital information-carrying effective nodes always suffer from an extremely low individual reading rate, which results from both unpredictable channel conditions and intense competition from other nodes. In this paper, we propose a rate-adaptation algorithm for effective nodes (RAEN) to improve the throughput of effective nodes by allowing them to transmit exclusively and work at an appropriate data rate. RAEN works in two stages: (1) RAEN exclusively extracts effective nodes with an identification module and a selection module; (2) RAEN then leverages a trigger mechanism, combined with a random forest-based classifier, to predict the overall optimal rate. As RAEN is fully compatible with the EPC C1G2 standard, we implement the experiment with a commercial reader and multiple RFID tags. Comprehensive experiments show that RAEN improves the throughput of effective nodes by 3× when 1/6 of the nodes are effective, compared with normal reading. Moreover, the throughput of RAEN is better than that of traditional rate-adaptation methods. Full article
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)

14 pages, 11465 KiB  
Communication
A Dual-Band 47-dB Dynamic Range 0.5-dB/Step DPA with Dual-Path Power-Combining Structure for NB-IoT
by Reza E. Rad, Sungjin Kim, Younggun Pu, Yeongjae Jung, Hyungki Huh, Joonmo Yoo, Seokkee Kim and Kang-Yoon Lee
Sensors 2022, 22(9), 3493; https://doi.org/10.3390/s22093493 - 4 May 2022
Viewed by 2030
Abstract
This paper presents a digital power amplifier (DPA) with a 47-dB dynamic range and 0.5-dB gain steps for a narrow-band Internet of Things (NB-IoT) transceiver application. The proposed DPA is implemented in a dual-band architecture covering both the low band and the high band of the frequency range in an NB-IoT application. It comprises two individual paths, power amplification and power attenuation, which together provide a wide dynamic range. To perform fine control over the gain steps, ten fully differential cascode power amplifier cores in parallel with binary sizing are used to amplify the power and provide fine gain steps. For the attenuation path, ten steps of attenuated signal level are provided, controlled by ten power cores in parallel, similar to the amplification path but with a fixed, small core size. The implementation is completed with custom-made baluns at the output. The technique of using parallel controlled cores provides fine power adjustability using a small die area; the chip is fabricated in a 65-nm CMOS technology. Experimental results show a dynamic range of 47 dB with 0.5-dB fine steps. Full article
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)

25 pages, 1723 KiB  
Article
Joint Optimization for Mobile Edge Computing-Enabled Blockchain Systems: A Deep Reinforcement Learning Approach
by Zhuoer Hu, Hui Gao, Taotao Wang, Daoqi Han and Yueming Lu
Sensors 2022, 22(9), 3217; https://doi.org/10.3390/s22093217 - 22 Apr 2022
Cited by 8 | Viewed by 2867
Abstract
A mobile edge computing (MEC)-enabled blockchain system is proposed in this study for secure data storage and sharing in internet of things (IoT) networks, with the MEC acting as an overlay system to provide dynamic computation offloading services. Considering latency-critical, resource-limited, and dynamic IoT scenarios, an adaptive system resource allocation and computation offloading scheme is designed to optimize the scalability performance for MEC-enabled blockchain systems, wherein the scalability is quantified as MEC computational efficiency and blockchain system throughput. Specifically, we jointly optimize the computation offloading policy and block generation strategy to maximize the scalability of MEC-enabled blockchain systems and meanwhile guarantee data security and system efficiency. In contrast to existing works that ignore frequent user movement and dynamic task requirements in IoT networks, the joint performance optimization scheme is formulated as a Markov decision process (MDP). Furthermore, we design a deep deterministic policy gradient (DDPG)-based algorithm to solve the MDP problem and define the multiple and variable number of consecutive time slots as a decision epoch to conduct model training. Specifically, DDPG can solve an MDP problem with a continuous action space and it only requires a straightforward actor–critic architecture, making it suitable for tackling the dynamics and complexity of the MEC-enabled blockchain system. As demonstrated by simulations, the proposed scheme can achieve performance improvements over the deep Q network (DQN)-based scheme and some other greedy schemes in terms of long-term transactional throughput. Full article
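A DDPG agent is too large for a snippet, but the underlying MDP formulation, states, offloading actions, and a discounted long-term objective, can be illustrated with value iteration on an invented toy offloading queue. Value iteration replaces DDPG here only because the toy state space is tiny and discrete; all numbers are made up:

```python
# States: queue length 0..2. Actions: 0 = process locally (serves 1 task),
# 1 = offload (serves 2 tasks, costs 1 unit of bandwidth). One new task
# arrives each slot; reward = -(queue length) - (offload cost).
GAMMA = 0.9
STATES, ACTIONS = range(3), range(2)

def step(s, a):
    served = 1 if a == 0 else 2
    s_next = min(max(s - served, 0) + 1, 2)
    reward = -s - (1 if a == 1 else 0)
    return s_next, reward

V = [0.0, 0.0, 0.0]
for _ in range(500):                      # value iteration to convergence
    V = [max(step(s, a)[1] + GAMMA * V[step(s, a)[0]] for a in ACTIONS)
         for s in STATES]

policy = [max(ACTIONS, key=lambda a: step(s, a)[1] + GAMMA * V[step(s, a)[0]])
          for s in STATES]
```

The resulting policy offloads only when the queue is full, paying bandwidth now to avoid holding cost later, which is the same delay/cost trade-off the DDPG agent learns over a continuous state-action space.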
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)

21 pages, 4135 KiB  
Article
The Usage of ANN for Regression Analysis in Visible Light Positioning Systems
by Neha Chaudhary, Othman Isam Younus, Luis Nero Alves, Zabih Ghassemlooy and Stanislav Zvanovec
Sensors 2022, 22(8), 2879; https://doi.org/10.3390/s22082879 - 8 Apr 2022
Cited by 5 | Viewed by 2364
Abstract
In this paper, we study the design aspects of an indoor visible light positioning (VLP) system that uses an artificial neural network (ANN) for position estimation by considering a multipath channel. Previous results usually rely on the simplistic line-of-sight model with limited validity. The study considers the influence of noise as a performance indicator for the comparison between different design approaches. Three different ANN algorithms are considered, including the Levenberg–Marquardt, Bayesian regularization, and scaled conjugate gradient algorithms, to minimize the positioning error (εp) in the VLP system. The ANN design is optimized based on the number of neurons in the hidden layers, the number of training epochs, and the size of the training set. It is shown that the ANN with Bayesian regularization outperforms the traditional received signal strength (RSS) technique using non-linear least squares estimation for all values of signal-to-noise ratio (SNR). Furthermore, in the inner region, which includes the area of the receiving plane within the transmitters, the positioning accuracy is improved by 43, 55, and 50% for SNRs of 10, 20, and 30 dB, respectively. In the outer region, which is the remaining area within the room, the positioning accuracy is improved by 57, 32, and 6% for SNRs of 10, 20, and 30 dB, respectively. We also analyze the impact of different training dataset sizes in the ANN and show that it is possible to achieve a minimum εp of 2 cm for 30 dB of SNR using a random selection scheme. Finally, it is observed that εp remains low even for lower values of SNR, i.e., εp values are 2, 11, and 44 cm for SNRs of 30, 20, and 10 dB, respectively. Full article
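The non-linear least-squares baseline can be sketched as a small hand-rolled Gauss–Newton solver in 2-D. The anchor layout and noise-free ranges below are illustrative assumptions, not the paper's setup (which derives distances from RSS through a path-loss model first):

```python
import math

def gauss_newton_2d(anchors, ranges, x, y, iters=25):
    """Estimate (x, y) from ranges to known anchors by Gauss-Newton:
    linearize the range model around the current estimate and solve the
    2x2 normal equations in closed form each iteration."""
    for _ in range(iters):
        # Accumulate J^T J and J^T r for residuals r_i = ||p - a_i|| - d_i.
        a11 = a12 = a22 = b1 = b2 = 0.0
        for (ax, ay), d in zip(anchors, ranges):
            est = math.hypot(x - ax, y - ay)
            jx, jy = (x - ax) / est, (y - ay) / est   # Jacobian row
            r = est - d
            a11 += jx * jx; a12 += jx * jy; a22 += jy * jy
            b1 += jx * r;   b2 += jy * r
        det = a11 * a22 - a12 * a12
        dx = (-b1 * a22 + b2 * a12) / det             # solve (J^T J) delta = -J^T r
        dy = (-b2 * a11 + b1 * a12) / det
        x, y = x + dx, y + dy
    return x, y

anchors = [(0, 0), (4, 0), (0, 4), (4, 4)]            # e.g. four LED positions
true = (1.0, 2.0)
ranges = [math.hypot(true[0] - ax, true[1] - ay) for ax, ay in anchors]
x, y = gauss_newton_2d(anchors, ranges, 2.0, 2.0)     # start at the room centre
```

With noisy RSS-derived ranges the same iteration converges to the least-squares position rather than the exact one, which is the εp the paper's ANN is shown to beat.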
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)

18 pages, 514 KiB  
Article
Resource Allocation and 3D Deployment of UAVs-Assisted MEC Network with Air-Ground Cooperation
by Jinming Huang, Sijie Xu, Jun Zhang and Yi Wu
Sensors 2022, 22(7), 2590; https://doi.org/10.3390/s22072590 - 28 Mar 2022
Cited by 7 | Viewed by 2570
Abstract
Equipping an unmanned aerial vehicle (UAV) with a mobile edge computing (MEC) server is an interesting technique for assisting terminal devices (TDs) in completing their delay-sensitive computing tasks. In this paper, we investigate a UAV-assisted MEC network with air–ground cooperation, where both the UAV and the ground access point (GAP) have a direct link with TDs and undertake computing tasks cooperatively. We set out to minimize the maximum delay among TDs by optimizing the resource allocation of the system and the three-dimensional (3D) deployment of the UAVs. Specifically, we propose an iterative algorithm that jointly optimizes the UAV–TD association, UAV horizontal location, UAV vertical location, bandwidth allocation, and task split ratio. However, the overall optimization problem is a mixed-integer nonlinear programming (MINLP) problem, which is hard to solve directly. Thus, we adopt successive convex approximation (SCA) and block coordinate descent (BCD) methods to obtain a solution. The simulation results show that our proposed algorithm is efficient and performs well compared to other benchmark schemes. Full article
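The BCD part of such an algorithm, cycling through variable blocks and minimizing over each with the others held fixed, can be sketched on an invented strongly convex toy objective with closed-form block updates:

```python
def bcd(x, y, sweeps=60):
    """Minimize f(x, y) = (x-1)^2 + (y-2)^2 + x*y by block coordinate
    descent: each update is the closed-form minimizer of one block with
    the other fixed (setting df/dx = 0 gives x = 1 - y/2, and vice versa)."""
    for _ in range(sweeps):
        x = 1.0 - y / 2.0    # minimize over block x with y fixed
        y = 2.0 - x / 2.0    # minimize over block y with x fixed
    return x, y

x, y = bcd(5.0, 5.0)         # converges to the global minimizer (0, 2)
```

Each sweep contracts the error by a constant factor because the Hessian [[2, 1], [1, 2]] is positive definite; in the paper, SCA first convexifies each block subproblem (e.g., the UAV location) so that a comparable per-block solve exists.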
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)

10 pages, 4580 KiB  
Communication
Chip Design of an All-Digital Frequency Synthesizer with Reference Spur Reduction Technique for Radar Sensing
by Wen-Cheng Lai
Sensors 2022, 22(7), 2570; https://doi.org/10.3390/s22072570 - 27 Mar 2022
Cited by 12 | Viewed by 3369
Abstract
A 5.2-GHz all-digital frequency synthesizer with a proposed reference spur reduction technique, implemented in TSMC 0.18 µm CMOS technology, is presented. It can be used for radar-equipped applications and radar-communication control. It provides an output frequency ranging from 4.68 GHz to 5.36 GHz for the local oscillator in RF front-end circuits. Adopting a phase detector that only delivers phase error raw data when a phase error is detected, and reducing the update frequency of the DCO control code, achieves a decreased reference spur. Since an all-digital phase-locked loop is designed, the prototype not only optimizes the chip dimensions but also precludes the influence of process shrinks and has the advantage of noise immunity. The key novelties of this work are low phase noise and low power consumption. With a 1.8 V supply voltage and locking at 5.22 GHz, measured results show that the output signal power is −8.03 dBm, the phase noise is −110.74 dBc/Hz at 1 MHz offset frequency, and the power dissipation is 16.2 mW, while the die dimensions are 0.901 × 0.935 mm2. Full article
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)

19 pages, 2848 KiB  
Article
Interference Management with Reflective In-Band Full-Duplex NOMA for Secure 6G Wireless Communication Systems
by Rabia Khan, Nyasha Tsiga and Rameez Asif
Sensors 2022, 22(7), 2508; https://doi.org/10.3390/s22072508 - 25 Mar 2022
Cited by 12 | Viewed by 2970
Abstract
The electromagnetic spectrum is used as the medium for modern wireless communication, and most of it is already utilized by existing communication systems. To achieve technological breakthroughs and better utilize this natural resource, a novel Reflective In-Band Full-Duplex (R-IBFD) cooperative communication scheme involving Full-Duplex (FD) and Non-Orthogonal Multiple Access (NOMA) technologies is proposed in this article. The proposed R-IBFD provides efficient use of the spectrum with better system parameters, including Secrecy Outage Probability (SOP), throughput, data rate, and secrecy capacity, to fulfil the requirements of a smart city for the sixth generation (6G). The proposed system addresses the need for new algorithms that contribute to the technological revolution required for 6G. In this article, R-IBFD mainly addresses the co-channel interference and security problems. In-Band Full-Duplex devices face high co-channel interference between their own transmitting and receiving antennas; R-IBFD minimizes the effect of such interference and assists in securing the desired wireless communication system. For a better understanding of these contributions, the improvement in secrecy capacity and interference with R-IBFD is discussed with the help of an SOP derivation, equations, and simulation results. A genetic algorithm, a machine learning optimization tool, is used to maximize the secrecy capacity. Full article
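A genetic algorithm maximizes a fitness function through selection, crossover, and mutation. A minimal sketch follows; the one-dimensional fitness (legitimate rate minus eavesdropper rate as a function of a power-allocation ratio) is an invented stand-in for the paper's secrecy-capacity expression:

```python
import math
import random

random.seed(1)

def fitness(a):
    """Toy secrecy-rate-like objective of a power ratio a in [0, 1]:
    legitimate-link rate minus eavesdropper rate (numbers invented)."""
    return math.log2(1 + 8 * a) - math.log2(1 + 3 * a * a + a)

def genetic_maximize(pop_size=30, generations=50, mut=0.05):
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                  # selection (keep best half)
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            child = (p1 + p2) / 2                     # crossover (blend)
            child += random.gauss(0, mut)             # mutation
            children.append(min(max(child, 0.0), 1.0))
        pop = elite + children
    return max(pop, key=fitness)

best = genetic_maximize()
# Brute-force check on a fine grid for comparison:
grid_best = max((i / 10000 for i in range(10001)), key=fitness)
```

Keeping the elite unchanged each generation makes the best fitness monotonically non-decreasing, which is why even this tiny GA reliably approaches the grid-search optimum.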
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)

13 pages, 8253 KiB  
Communication
A 0.617–2.7 GHz Highly Linear High-Power Dual Port 15 Throws Antenna Switch Module (DP15T-ASM) with Branched-Antenna Technique and Termination Mode
by Reza E. Rad, Kyung-Duk Choi, Sung-Jin Kim, Young-Gun Pu, Yeon-Jae Jung, Hyung-Ki Huh, Joon-Mo Yoo, Seok-Kee Kim and Kang-Yoon Lee
Sensors 2022, 22(6), 2276; https://doi.org/10.3390/s22062276 - 15 Mar 2022
Cited by 1 | Viewed by 2452
Abstract
This paper presents a Dual-Port-15-Throw (DP15T) antenna switch module (ASM) radio frequency (RF) switch implemented with a branched-antenna technique, which has high linearity for wireless communications and covers various frequency bands, including a low-frequency band of 617–960 MHz, a mid-frequency band of 1.4–2.2 GHz, and a high-frequency band of 2.3–2.7 GHz. To obtain an acceptable Insertion Loss (IL) and provide a consistent input for each throw, a branched antenna technique is proposed that distributes a unified magnetic field at the inputs of the throws. The other role of the proposed antenna is to increase the inductance effects for the ports closer to the antenna pad in order to decrease IL at higher frequencies. The module is enhanced by two termination modes for each antenna path to terminate the antenna when the switch is not operating. The module is fabricated in a silicon-on-insulator CMOS process. The measurement results show a maximum IMD2 and IMD3 of −100 dBm, while for the second and third harmonics the maximum value is −89 dBc. The module operates with a maximum power handling of 35 dBm. Experimental results show a maximum IL of 0.34 and 0.92 dB and a minimum isolation of 49 dB and 35.5 dB at 0.617 GHz and 2.7 GHz, respectively. The module is implemented compactly, occupying an area of 0.74 mm2. The termination modes show a second harmonic of 75 dBc, which is desirable. Full article
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)

17 pages, 3666 KiB  
Article
Two-Stage Multiarmed Bandit for Reconfigurable Intelligent Surface Aided Millimeter Wave Communications
by Ehab Mahmoud Mohamed, Sherief Hashima, Kohei Hatano and Saud Alhajaj Aldossari
Sensors 2022, 22(6), 2179; https://doi.org/10.3390/s22062179 - 10 Mar 2022
Cited by 16 | Viewed by 2878
Abstract
A reconfigurable intelligent surface (RIS) is a promising technology that can extend short-range millimeter wave (mmWave) communications coverage. However, phase shifts (PSs) of both mmWave transmitter (TX) and RIS antenna elements need to be optimally adjusted to effectively cover a mmWave user. This paper proposes codebook-based phase shifters for mmWave TX and RIS to overcome the difficulty of estimating their mmWave channel state information (CSI). Moreover, to adjust the PSs of both, an online learning approach in the form of a multiarmed bandit (MAB) game is suggested, where a nested two-stage stochastic MAB strategy is proposed. In the proposed strategy, the PS vector of the mmWave TX is adjusted in the first MAB stage. Based on it, the PS vector of the RIS is calibrated in the second stage and vice versa over the time horizon. Hence, we leverage and implement two standard MAB algorithms, namely Thompson sampling (TS) and upper confidence bound (UCB). Simulation results confirm the superior performance of the proposed nested two-stage MAB strategy; in particular, the nested two-stage TS nearly matches the optimal performance. Full article
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)
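The nested two-stage strategy alternates bandit learning over the TX and RIS phase-shift codebooks. As a minimal sketch of the single-stage building block, the code below runs UCB1 over a hypothetical codebook of eight phase-shift vectors with assumed mean rewards (all values are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed mean rewards (e.g., normalized achievable rates) for 8 codebook
# entries; in practice the reward would come from measured SNR feedback.
true_means = np.array([0.2, 0.5, 0.3, 0.8, 0.4, 0.6, 0.1, 0.7])
K, T = len(true_means), 2000

counts = np.zeros(K)  # number of times each codebook entry was tried
sums = np.zeros(K)    # cumulative reward per entry

for t in range(1, T + 1):
    if t <= K:
        arm = t - 1                                # try every entry once first
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        arm = int(np.argmax(ucb))                  # UCB1: mean plus exploration bonus
    reward = rng.normal(true_means[arm], 0.1)      # noisy rate observation
    counts[arm] += 1
    sums[arm] += reward

best_entry = int(np.argmax(sums / counts))         # entry with best empirical mean
```

In the two-stage scheme described in the abstract, one such learner would select the TX phase-shift vector while a second learner, conditioned on that choice, selects the RIS vector (and vice versa) over the time horizon.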

23 pages, 10535 KiB  
Article
Insertion Loss and Phase Compensation Using a Circular Slot Via-Hole in a Compact 5G Millimeter Wave (mmWave) Butler Matrix at 28 GHz
by Noorlindawaty Md Jizat, Zubaida Yusoff, Azah Syafiah Mohd Marzuki, Norsiha Zainudin and Yoshihide Yamada
Sensors 2022, 22(5), 1850; https://doi.org/10.3390/s22051850 - 26 Feb 2022
Cited by 5 | Viewed by 2866
Abstract
Fifth generation (5G) technology aims to provide high peak data rates and increased bandwidth, and to support a 1 millisecond round-trip latency at millimeter wave (mmWave). However, the higher frequency bands in mmWave come with challenges, including poor propagation characteristics and lossy structures. The beamforming Butler matrix (BM) is an alternative design intended to overcome these limitations by controlling the phase and amplitude of the signal, which reduces the path loss and penetration losses. At mmWave, the wavelength becomes smaller, and the BM planar structure is intricate and suffers from insertion losses and size issues due to this complexity. To address these issues, a dual-layer substrate is connected through a via, and the hybrids are arranged side by side. The dual-layer structure circumvents the crossover elements, while the strip line, hybrids, and via-hole are carefully designed for each BM element. The design features a compact, low-profile structure, with dimensions of 23.26 mm × 28.92 mm (2.17 λ0 × 2.69 λ0), ideally suited for the 5G mmWave communication system. The measured results for the designed BM show return losses, Sii and Sjj, of less than −10 dB, a transmission amplitude of −8 ± 2 dB, and an acceptable range of output phases at 28 GHz. Full article
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)

19 pages, 3977 KiB  
Article
An Interference-Managed Hybrid Clustering Algorithm to Improve System Throughput
by Naureen Farhan and Safdar Rizvi
Sensors 2022, 22(4), 1598; https://doi.org/10.3390/s22041598 - 18 Feb 2022
Cited by 6 | Viewed by 2066
Abstract
In the current smart era of 5G, cellular devices and mobile data have increased exponentially. The conventional network deployment and protocols do not fulfill the ever-increasing demand for mobile data traffic. Therefore, ultra-dense networks have widely been suggested in the recent literature. However, deploying an ultra-dense network (UDN) under macro cells leads to severe interference management challenges. Although various centralized and distributed clustering methods have been used in most research work, the issue of increased interference persists. This paper proposes a joint small cell power control algorithm (SPC) and interference-managed hybrid clustering (IMHC) scheme, to resolve the issue of co-tier and cross-tier interference in the small cell base station cluster tiers. The small cell base stations (SBSs) are categorized based on their respective transmitting power, as high-power SBSs (HSBSs) and low-power SBSs (LSBSs). The simulation results show that by implementing the IMHC algorithm for SBSs in a three-tier heterogeneous network, the system throughput is improved with reduced interference. Full article
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)
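The categorization step in the IMHC scheme splits small cell base stations by transmit power. A minimal sketch of that split, with assumed powers and an illustrative threshold (neither is from the paper):

```python
import numpy as np

# Assumed SBS transmit powers in dBm and an illustrative split threshold.
tx_power_dbm = np.array([30.0, 23.0, 17.0, 26.0, 20.0, 15.0])
threshold_dbm = 24.0

hsbs = np.where(tx_power_dbm >= threshold_dbm)[0]  # high-power SBS (HSBS) indices
lsbs = np.where(tx_power_dbm < threshold_dbm)[0]   # low-power SBS (LSBS) indices
```

The two groups would then be clustered and power-controlled separately to manage co-tier and cross-tier interference.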

12 pages, 4161 KiB  
Communication
Performance Comparison of Repetition Coding MIMO Optical Wireless Communications with Distinct Light Beams
by Jupeng Ding, Chih-Lin I, Jintao Wang, Hui Yang and Lili Wang
Sensors 2022, 22(3), 1256; https://doi.org/10.3390/s22031256 - 7 Feb 2022
Cited by 2 | Viewed by 1854
Abstract
In current optical wireless communications (OWC) research, various multiple input multiple output (MIMO) techniques are introduced and utilized to enhance coverage performance. Objectively, this Lambertian-light-beam-based MIMO research paradigm neglects light beam diversity and the potential performance gains. In this work, the distinct non-Lambertian light beams of commercially available light emitting diodes (LEDs) were adopted to configure MIMO OWC links. Specifically, homogeneous and heterogeneous non-Lambertian MIMO configurations were constituted in typical indoor scenarios. Moreover, applying the low-complexity repetition coding (RC) MIMO algorithm, a spatial coverage performance comparison was made between the above non-Lambertian configurations and the well-studied Lambertian MIMO configuration. Numerical results illustrate that the homogeneous NSPW light beam configuration could provide a more than 30 dB average signal to noise ratio (SNR), while the achievable average SNR of the heterogeneous light beam configuration was up to 28.77 dB. By contrast, the Lambertian configuration achieved only about 27.00 dB. This work thus paves the way for the further design and optimization of MIMO OWC in this novel light beam dimension. Full article
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)
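With repetition coding, every LED transmits the same symbol, so the optical channel gains add before photodetection. A minimal electrical-SNR calculation under assumed channel gains, responsivity, transmit power, and noise variance (none of these values are from the paper):

```python
import numpy as np

# Illustrative DC channel gains from 4 LEDs to one photodiode (assumed).
h = np.array([1.2e-6, 0.8e-6, 1.0e-6, 0.6e-6])
resp = 0.53          # photodiode responsivity (A/W), assumed
p_tx = 1.0           # per-LED transmitted optical power (W), assumed
noise_var = 1e-13    # total receiver noise variance (A^2), assumed

# Repetition coding: all LEDs carry the same symbol, so gains add coherently.
rx_current = resp * p_tx * h.sum()
snr_db = 10 * np.log10(rx_current**2 / noise_var)
```

Sweeping the receiver position (and hence the gain vector `h`) over a room grid yields the spatial coverage maps compared in the abstract.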

19 pages, 921 KiB  
Article
Joint Beam-Forming, User Clustering and Power Allocation for MIMO-NOMA Systems
by Jiayin Wang, Yafeng Wang and Jiarun Yu
Sensors 2022, 22(3), 1129; https://doi.org/10.3390/s22031129 - 2 Feb 2022
Cited by 8 | Viewed by 2278
Abstract
In this paper, we consider the optimal resource allocation problem for multiple-input multiple-output non-orthogonal multiple access (MIMO-NOMA) systems, which consists of beam-forming, user clustering and power allocation. Users can be divided into different clusters, and the users in the same cluster are served by the same beam vector. Inter-cluster orthogonality can be guaranteed based on multi-user detection (MUD). In this paper, we propose a three-step framework to solve this multi-dimensional resource allocation problem. In step 1, we propose a beam-forming algorithm for a given user cluster. Specifically, fractional transmitting power control (FTPC) is applied for intra-cluster power allocation. The considered beam-forming problem can be transformed into an unconstrained one, and the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) method is applied to obtain the optimal solution. In step 2, optimal user clustering is further considered. Channel differences and correlations are both involved in the design of user clustering. By assigning different weights to these two factors, we can produce multiple candidate clustering schemes. Based on the proposed beam-forming algorithm, beam-forming can be performed for each candidate clustering scheme to compare their performances. Moreover, based on the optimal user clustering and beam-forming schemes, in step 3, power allocation can be further optimized. Specifically, it can be formalized as a difference of convex (DC) programming problem, which is solved by successive convex approximation (SCA) with strong robustness. Simulation results show that the proposed scheme can effectively improve spectral efficiency (SE) and edge users' data rates. Full article
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)
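Fractional transmit power control assigns more power to users with weaker channels. A minimal sketch of one common FTPC rule, p_k ∝ g_k^(−α), normalized to the cluster budget, with assumed channel gains and decay factor (the exact rule and values in the paper may differ):

```python
import numpy as np

# Assumed channel gains of 3 users in one NOMA cluster (strongest first).
gains = np.array([1.0, 0.4, 0.1])
alpha = 0.7          # FTPC decay factor in [0, 1], assumed
p_total = 1.0        # cluster power budget (W), assumed

# FTPC: power proportional to gain^(-alpha), so weaker users get more power.
weights = gains ** (-alpha)
p = p_total * weights / weights.sum()
```

This ordering (more power to the weakest user) is what lets successive interference cancellation separate the superimposed NOMA signals within a cluster.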

21 pages, 646 KiB  
Article
Data-Driven and Model-Driven Joint Detection Algorithm for Faster-Than-Nyquist Signaling in Multipath Channels
by Xiuqi Deng, Xin Bian and Mingqi Li
Sensors 2022, 22(1), 257; https://doi.org/10.3390/s22010257 - 30 Dec 2021
Viewed by 2251
Abstract
In recent years, Faster-than-Nyquist (FTN) transmission has been regarded as one of the key technologies for future 6G due to its advantages in high spectrum efficiency. However, as the price of improved spectrum efficiency, the FTN system introduces inter-symbol interference (ISI) at the transmitting end, which leads to serious deterioration in the performance of traditional receiving algorithms under high compression rates and harsh channel environments. Data-driven detection algorithms have performance advantages for the detection of high-compression-rate FTN signaling, but current related work has mainly focused on the Additive White Gaussian Noise (AWGN) channel. In this article, for FTN signaling in multipath channels, a data- and model-driven joint detection algorithm, i.e., the DMD-JD algorithm, is proposed. This algorithm first uses a traditional MMSE or ZF linear equalizer to complete the channel equalization, and then processes the serious ISI introduced by FTN through a deep learning network based on CNN or LSTM, thereby effectively avoiding the problem of insufficient generalization of deep learning algorithms in different channel scenarios. The simulation results show that in multipath channels, the performance of the proposed DMD-JD algorithm is better than that of purely model-based or data-driven algorithms; in addition, a deep learning network trained on a single channel model can be well adapted to FTN signal detection under other channel models, thereby improving the engineering practicability of deep-learning-based FTN signal detection. Full article
(This article belongs to the Topic Wireless Communications and Edge Computing in 6G)
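The model-driven first stage of such a joint detector is a conventional linear equalizer. The sketch below applies a block MMSE equalizer to BPSK symbols passed through an assumed 3-tap multipath channel (the channel, block length, and noise level are illustrative, and the deep-learning stage that would follow is omitted):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed 3-tap multipath channel and BPSK symbols (illustrative setup).
h = np.array([1.0, 0.5, 0.2])
N = 200
sym = rng.choice([-1.0, 1.0], size=N)
noise_var = 0.01

# Received block: channel convolution plus AWGN.
rx = np.convolve(sym, h)[:N] + rng.normal(0, np.sqrt(noise_var), N)

# Lower-triangular Toeplitz channel matrix: rx[i] = sum_k h[k] * sym[i-k].
H = np.zeros((N, N))
for i in range(N):
    for k, tap in enumerate(h):
        if i - k >= 0:
            H[i, i - k] = tap

# Block MMSE equalizer: W = (H^T H + sigma^2 I)^(-1) H^T, then hard decisions.
W = np.linalg.solve(H.T @ H + noise_var * np.eye(N), H.T)
est = W @ rx
decisions = np.sign(est)
ber = np.mean(decisions != sym)
```

In the DMD-JD pipeline the equalizer output `est` would be fed to a CNN or LSTM that removes the residual FTN-induced ISI instead of being sliced directly.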