Article

Sustainable Resource Allocation and Base Station Optimization Using Hybrid Deep Learning Models in 6G Wireless Networks

by Krishnamoorthy Suresh 1, Raju Kannadasan 2,*, Stanley Vinson Joshua 3, Thangaraj Rajasekaran 4, Mohammed H. Alsharif 5, Peerapong Uthansakul 6,* and Monthippa Uthansakul 6

1 Department of Information Technology, Sri Venkateswara College of Engineering, Sriperumbudur 602117, India
2 Department of Electrical and Electronics Engineering, Sri Venkateswara College of Engineering, Sriperumbudur 602117, India
3 Department of Electronics and Communication Engineering, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai 600062, India
4 Department of Computer Science and Engineering, Sri Venkateswara College of Engineering, Sriperumbudur 602117, India
5 Department of Electrical Engineering, College of Electronics and Information Engineering, Sejong University, Seoul 05006, Republic of Korea
6 School of Telecommunication Engineering, Suranaree University of Technology, Nakhon Ratchasima 30000, Thailand
* Authors to whom correspondence should be addressed.
Sustainability 2024, 16(17), 7253; https://doi.org/10.3390/su16177253
Submission received: 17 June 2024 / Revised: 7 August 2024 / Accepted: 14 August 2024 / Published: 23 August 2024
(This article belongs to the Section Sustainable Engineering and Science)

Abstract

Researchers are currently exploring the anticipated sixth-generation (6G) wireless communication network, which is poised to deliver minimal latency, reduced power consumption, extensive coverage, high-level security, cost-effectiveness, and sustainability. Quality of Service (QoS) improvements can be attained through effective resource management facilitated by Artificial Intelligence (AI) and Machine Learning (ML) techniques. This paper proposes two models for enhancing QoS through efficient and sustainable resource allocation and base station optimization. The first model, a Hybrid Quantum Deep Learning approach, incorporates Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs): CNNs handle resource allocation, network reconfiguration, and slice aggregation tasks, while RNNs are employed for functions such as load balancing and error detection. The second model introduces a novel neural network named the Base Station Optimizer net, which takes parameters describing the condition of each base station within the network, such as node coverage, number of users, node count, user locations, and operating frequency, and produces a binary decision (ON or SLEEP) for each base station. A dynamic allocation strategy aims to maximize network lifetime, ensuring sustainable operation and reducing power consumption across the network by 2 dB. The QoS performance of the Hybrid Quantum Deep Learning model is evaluated for many devices based on slice characteristics and congestion scenarios, attaining an impressive overall accuracy of 98%.

1. Introduction

The Sixth Generation (6G), presently in development, is the successor to the existing Fifth Generation (5G) cellular data network. Like its predecessor, 6G will be organized into geographical sectors known as cells, with multiple tiers of base stations such as pico-, femto-, and macro-cells, akin to the structure seen in 5G. Companies such as Huawei and Samsung, as well as China, have initiated 6G development, with China launching the first 6G test satellite. Functioning at terahertz frequencies, such 6G satellites are often grouped in sets of 12. The 6G wireless communication network is poised to be a significant advancement beyond 5G, promising extensive coverage, low latency, cost-effectiveness, reduced power consumption, high-level security, and sustainability. Proper resource management supported by Artificial Intelligence (AI) and Machine Learning (ML) techniques can significantly improve QoS in 6G networks.
6G represents a paradigm shift in which the physical, cyber, and biological worlds are interconnected, laying the foundation for the Intelligence of Everything. It will impact many aspects of daily life, offering significant advances in speed, Artificial Intelligence capabilities, and sensing technologies. Five significant use-case categories are envisioned for 6G: enhanced Mobile Broadband (eMBB+), ultra-reliable low-latency communications (URLLC+), massive machine-type communications (mMTC+), sensing, and Artificial Intelligence [1]. The foundational principles of 6G are shown in Figure 1. eMBB+ focuses on immersive experiences and multi-modal interactions in XR applications such as VR, AR, and MR, transforming the way we live, learn, and work. URLLC+ aims to support real-time communication for industrial, autonomous, and domestic applications, enabling precise and deterministic communication for collaborative robots. mMTC+ continues to connect a wide range of devices, with a focus on smart cities, healthcare, transportation, and agriculture, promoting sustainable development in these areas. Sensing integrates communication and sensing capabilities, enabling a new range of applications such as object tracking, weather monitoring, and signal recognition. AI in 6G networks involves distributed machine learning and network AI, enabling large-scale AI deployments across industries and enhancing efficiency and sustainability in operations [2].
The envisioned 6G network is set to support a range of applications, including current mobile cellular services, augmented and virtual reality, ubiquitous instant communication, and the Internet of Things. Even when operating in the terahertz range, antenna sizes must remain manageable, aided by reflective intelligent surfaces. The 6G network will adopt a spectrum-sharing approach, integrating mobile edge computing, artificial intelligence, and modern security tools such as blockchain, and employing smaller communication packets. Gallium nitride High Electron Mobility Transistors (HEMTs) with n-type doping are needed for 6G transponder manufacturing because of their capacity to function at extremely high frequencies [3]. The interconnection of people and devices to achieve linked intelligence is shown in Figure 2 [3].
The proposed Hybrid Quantum Deep Learning model is designed to handle various aspects of 6G networks. CNNs are utilized for resource distribution, slice aggregation, and network reconfiguration, while RNNs handle load balancing, error management, and related tasks. A basic 6G network is simulated with a focus on base station optimization through a deep neural network technique called the Base Station Optimizer net. Given the input network parameters, the Base Station Optimizer net outputs which base stations should be deactivated to conserve power. For instance, when a device closer to Base Station 2 is also within the range of Base Station 1, the Base Station Optimizer net suggests deactivating Base Station 2, allowing the device to connect to Base Station 1.
Mobile technology plays a vital role in modern life, and the number of communication devices has grown exponentially. These devices require high throughput, mobility, low latency, and enhanced QoS to meet the demands of next-generation communication technology. Future technologies demand reliability and adaptable management across various radio systems. Artificial Intelligence is emerging as a promising means of advancing and optimizing 6G systems with a high degree of intelligence. With AI integration, mobile network operators can operate with increased efficiency and lower costs. Machine Learning, Quantum Machine Learning, and Quantum Computing are poised to become integral components of future 6G networks, and quantum machine learning-assisted advances beyond 5G are emerging as promising research directions in communication networks. Future 6G networks face research challenges such as resource-optimized edge intelligence, signal processing, and in-depth network comprehension. AI plays a crucial role in designing and enhancing 6G protocols, architecture, and procedures, and ML techniques are employed to optimize applications within 6G networks. Explainable AI fosters trust between humans and systems, enhancing the reliability of future 6G communication networks. AI and ML algorithms are leveraged to improve QoS parameters for both 5G and 6G communication networks; hybrid machine learning models can raise overall accuracy to 96% in B5G communication networks. AI-powered architectures offer solutions for 6G networks, including resource optimization, energy management, computational efficiency, and algorithm robustness.
Lin et al. [4] introduced an IoT architecture for resource allocation in 6G networks targeting data rates of 100 Gbps. Their approach involved a nested neural network for dynamic resource allocation in scenarios with mobile phones, vehicles, base stations, and personal computers. Incorporating a Markov decision process, their algorithm sensed the condition of end devices in real time and assigned different priorities for resource allocation; resource allocation improved by approximately 8%, and device service times were reduced by 7%. Jain et al. [5] employed a blockchain approach for resource allocation in a Cybertwin-based 6G network. They highlighted the positive impact of blockchain on 6G networks, emphasizing its ability to track and manage resources effectively. Their novel optimization algorithm, Quasi-Oppositional Search and Rescue Optimization (QO-SRO), achieves improved convergence rates: the baseline system cost was 4.906 for a 1000 kb file in 5 iterations, with a power consumption of 0.878 mW for around 100 nodes, and the optimization algorithm reduced the system cost to 2.63 and the power consumption to 0.012 mW, significantly lower than alternative methods. Guan et al. [6] employed Deep Reinforcement Learning (DRL) to design a resource management algorithm that allocates different slices of the network to different resources; by using DRL to assess device requests and allocate resources per slice, they increased network satisfaction by up to 20%. Liu et al. [7] utilized a reinforcement learning algorithm and a constrained Markov decision process for resource allocation, implementing network slicing to reduce costs. Their model allowed end-users to adapt dynamically to network changes and customize slicing, continuously adjusting to evolving user behavior by learning from network state conditions. Mukherjee et al. [8] addressed energy consumption in massive IoT systems using distributed artificial intelligence for node and location identification; convolutional neural networks and backpropagation neural networks were utilized for optimization, with efficiency measured by the reduction in energy consumption. Muller et al. [9] focused on resource management, intelligent service provision, and automatic network adjustment in 6G, leveraging AI-powered techniques to optimize the network with greater training accuracy, reduced computational complexity, and enhanced speed.
Developing an intelligent decision system that efficiently allocates network slices to incoming traffic, ensures load balancing, prevents network slice failures, and provides alternative slices in case of failure or congestion is a significant challenge facing the research community. Additionally, reducing power consumption through base station optimization is essential for achieving sustainable network operations. The objective of this paper is to implement the proposed model in three different scenarios: precise slice assignment, load balancing, and slice fault state.
The contributions of this proposed approach are as follows:
  • To calculate accuracy, recall, precision, and F-score of the Hybrid Quantum Deep Learning model.
  • To calculate power consumption upon implementing the Base Station Optimizer net model.
  • To attain efficient and sustainable resource management and base station optimization using a Hybrid Quantum Deep Learning model.
This paper highlights the potential of Hybrid Quantum Deep Learning to improve 6G network performance and sustainability. The remainder of the paper is structured as follows: a review of the role of AI in 6G networks, the proposed models and research methodology, performance evaluation of the proposed models, discussion of the results, and conclusions.

2. Smart 6G Networks

2.1. AI-Assisted Network

The advancement of 6G systems is expected to occur on a large scale, featuring multiple layers, high complexity, dynamism, and hybrid structures. Furthermore, 6G systems need to support seamless connectivity and meet various QoS requirements for a wide range of devices while handling vast amounts of data generated from physical environments. AI techniques, characterized by robust analysis capabilities, knowledge management, adaptive learning, and intelligent decision-making, can be harnessed in 6G systems to achieve integrated performance optimization, information retrieval, continuous learning, organizational structuring, and complex decision-making. With the integration of AI, the intelligent connectivity of 6G systems encompasses various components such as remote radio heads, base stations, visible light communications, and equipment mounted on moving objects such as self-driving vehicles. 6G involves cell-free coverage of large areas with ultra-high data rates, transient hotspots served by drone-mounted base stations, network deployment in unconventional environments such as the upper atmosphere and underwater conduits, and the use of vehicles as cloud/edge appliances. This design underscores the innovative and complex nature of 6G systems, which aim to provide exceptional connectivity and meet the diverse needs of future technologies [10]. Artificial intelligence refers to the simulation of human cognitive processes by machines, typically computer systems. AI applications focus on three primary cognitive abilities: learning, reasoning, and self-correction. These applications rely on algorithms, sets of step-by-step instructions that dictate how to achieve a particular outcome.
The potential role of AI in future 6G network systems includes the integration of user equipment and large-scale intelligent surfaces with radio access technologies. AI plays a crucial role in optimizing various aspects of a wireless system, including AI-supported physical layer enhancements and AI-driven management of wireless resources at the system and Medium Access Control (MAC) levels. Additionally, AI is instrumental in controlling and supervising the overall network system, facilitating system design, and orchestrating significant network management functions within 6G communication systems [11], highlighting the critical role of AI in shaping the capabilities and efficiency of upcoming 6G networks.

2.2. Function of AI in Upcoming 6G Networks

The potential applications of AI in future 6G networks span various layers and functionalities within the network:
  • AI for Physical Layer: This involves tasks such as waveform recognition, categorization, frequency encryption and decryption, AI-assisted positioning, perception, localization, network assessment, and stabilization. AI also contributes to creating AI-friendly boundary devices.
  • AI for MAC Layer RRM (Radio Resource Management): In this context, AI is employed for user clustering, proactive resource allocation, flexible power control, and interference management.
  • AI for Control and Management: In the control and management layer, AI plays a pivotal role in dynamic network adaptation, active segment administration, self-management and policy implementation, critical system enforcement, dimensioning and monitoring, and security enhancement.
  • AI for Higher-Level RRM: Here, AI facilitates the creation of an AI-aided multilateral system for Radio Access Network (RAN) slice management, slice access mechanisms, segment provisioning, traffic management, and mobility administration.
These applications showcase the versatile and comprehensive role of AI in enhancing the capabilities and performance of future 6G networks, encompassing various network layers and functions [12].

3. Proposed Model

3.1. Hybrid Quantum Deep Learning Model

The task of slicing 6G networks is intricate and vital for the development of next-generation radio systems, businesses, and cellular service providers. Network slicing is fundamentally integral to 6G technology: it allows cellular operators to create multiple virtual networks on a common infrastructure while ensuring the desired QoS [13]. To address challenges such as intelligent decision-making for future network provisioning, load balancing, network slice fault prevention, and alternative slice provisioning in case of faults or overload situations, a reconfigurable machine learning-based radio communication system is required. In this paper, a Hybrid Quantum Deep Learning model consisting of Convolutional Neural Networks and Recurrent Neural Networks is proposed, as illustrated in Figure 3, and the proposed methodology is shown in Figure 4.
The Convolutional Neural Network components are responsible for traffic flow identification, resource allocation [13], and network location assignment. Recurrent Neural Networks, known for their prediction capabilities, are employed for numerical tasks such as capacity assessment and allocation of alternative slices in scenarios such as network slice faults or incorrect resource allocation. This hybrid approach achieves high precision, low misclassification rates, and relatively short training and model-generation times. The simulation results demonstrate high capacity with a small amount of data, indicating that the proposed model does not suffer from information loss. The model's performance was evaluated under various scenarios, including different training and testing setups, time utilization, unpredictable resource demands, slice overloads, slice failures, imbalanced resource allocation, and accuracy [14]. The proposed prototype achieved an impressive accuracy level of 98%, demonstrating its relevance to the specific research problem.
Input: Different equipment (smartphone, intelligent home, industrial device, etc.)
Output: Network slicing, no network slice failure, load balancing, and an alternative network segment for inbound system traffic flow in the event of a segment error.
The procedure for the proposed Hybrid Quantum Deep Learning model is outlined in Algorithm 1. In this procedure, “i” and “L” are placeholders used to store data for each network slice: “L” is used to evaluate statistical data for a specific slice, while “i” counts the incoming network traffic flow requests. The procedure considers eMBB, URLLC, and mMTC slices based on the specific requirements of each traffic flow. For example, eMBB requires high data speeds and continuous connectivity for real-time applications; URLLC demands reliable, low-latency links; and mMTC suits low data speeds with high device density. Each inbound traffic flow is automatically assigned to the appropriate slice based on its requirements. This approach ensures efficient resource utilization and appropriate QoS for various network traffic flows, catering to the diverse needs of 6G networks [15,16]. An illustrative Python sketch of this routing rule is provided after Algorithm 1.
Algorithm 1. 6G communication system procedure for the slice prototype
Begin
Step 1: Set eMBB, URLLC, mMTC, and mFile as zero vectors of size L
Step 2: Initialize slice ← 0
Step 3: Initialize the network requests centered on the various key categories
While i ≤ L
T1 = (size of(eMBB)/L) × 100
T2 = (size of(URLLC)/L) × 100
T3 = (size of(mMTC)/L) × 100
T4 = (size of(mFile)/L) × 100
If (high-level data speed && T1 ≤ 90%) eMBB[i] = reqi
Else if ((reliable && less delay) && T2 ≤ 90%) URLLC[i] = reqi
Else if ((low-level data speed && high-level intensity) && T3 ≤ 90%) mMTC[i] = reqi
Else mFile[i] = reqi
End if
End while
End
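To make the routing rule concrete, the following Python sketch mirrors Algorithm 1. It is an illustration only: the slice names and the 90% utilization cap follow the algorithm, while the request attributes and the per-slice load bookkeeping are assumptions of this sketch.

```python
# Illustrative Python sketch of Algorithm 1's slice-assignment rule.
# Slice names and the 90% utilization cap follow the algorithm; request
# attributes and capacity bookkeeping are assumptions of this sketch.
THRESHOLD = 90.0  # per-slice utilization cap (%)

slices = {name: {"load": 0, "capacity": 100}
          for name in ("eMBB", "URLLC", "mMTC", "mFile")}

def utilization(name):
    s = slices[name]
    return s["load"] / s["capacity"] * 100

def assign_slice(req):
    """Route an incoming traffic flow to a slice, mirroring Algorithm 1."""
    if req["high_data_rate"] and utilization("eMBB") <= THRESHOLD:
        target = "eMBB"
    elif req["reliable"] and req["low_delay"] and utilization("URLLC") <= THRESHOLD:
        target = "URLLC"
    elif req["low_data_rate"] and req["high_density"] and utilization("mMTC") <= THRESHOLD:
        target = "mMTC"
    else:
        target = "mFile"  # master/fallback slice absorbs everything else
    slices[target]["load"] += 1
    return target

# Example: a dense, low-rate sensor flow is routed to the mMTC slice.
print(assign_slice({"high_data_rate": False, "reliable": False,
                    "low_delay": False, "low_data_rate": True,
                    "high_density": True}))
```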

3.2. Base Station Optimizer Net Research Methodology

To construct the Base Station Optimizer net (BSOnet) model, MATLAB 2020 is utilized. The conceptual diagram of the Base Station Optimizer net is shown in Figure 5, and the architecture of the proposed model is illustrated in Figure 6.
The components of BSOnet are listed as follows:
  • Input Layer—raw data such as user data, geospatial information, network configuration parameters, and traffic patterns are taken as input and preprocessed to standardize and normalize them for the model.
  • Feature Extraction Layer—Convolutional Neural Networks (CNNs) or other feature extraction techniques are used to find appropriate patterns and features in the input data.
  • Optimization Layer—Recurrent Neural Networks (RNNs) or Long Short-Term Memory (LSTM) networks are utilized to capture temporal dependencies and dynamic changes in the network. Optimization algorithms are applied to find the best configuration for base station placement, load balancing, and resource allocation.
  • Decision Layer—fully connected neural networks are utilized for making decisions based on the extracted features and optimized parameters. This layer recommends power settings, frequency bands, base station placement, and other configurations.
  • Training and Learning Mechanism—the model is trained using historical performance data and simulation results. It improves continuously and adapts to changing network conditions through reinforcement learning or other adaptive learning techniques.
The chosen parameter values for building our Convolutional Neural Network were as follows:
  • Number of features: 14, including transmitter location, receiver location, transmitter power, channel frequency, number of users, receiver power, channel noise, number of packets, packet length, load on devices, number of neighboring base stations, distance from neighboring base stations, bandwidth, and delay in the network.
  • Second layer: 12 nodes with fully connected layer.
  • Third layer: 8 nodes with fully connected layer.
  • Fourth layer: 2 nodes with fully connected layer.
  • Fifth layer: Softmax activation function. The base station is ON for output >0.5; the base station is OFF for output <0.5.
  • Final layer: Classification layer providing the ultimate output based on the Softmax function’s value.
Nout = ⌊(Nin + 2 × padding − kernel size)/stride⌋ + 1  (1)
Equation (1) gives the standard formula for the output size of a convolutional layer. To initiate the training of this model, a sample dataset was constructed as shown in Table 1. This dataset encompassed varying values for the number of users, the number of base stations and their respective locations, and the number of and distances to neighboring base stations. The simulated network area was set to 1000 × 1000. Additionally, parameters such as device load, transmitter power, channel noise, and packet count were adjusted; this adjustment involved multiple executions of a MATLAB parameter-generation script. Based on the generated parameters, the dataset was categorized into two classes: base station ON and base station OFF. The dataset was then loaded into the Deep Network Designer available in MATLAB, and the model was trained. To determine the variables with the greatest impact on the model results, Lasso regression was employed to perform both variable selection and regularization, penalizing the absolute size of the regression coefficients and retaining the variables with the greatest influence on the results. In summary, this work proposes two models for enhancing QoS through efficient and sustainable resource allocation and base station optimization: the Hybrid Quantum Deep Learning model, which incorporates both RNNs and CNNs, and the Base Station Optimizer net, designed solely as a CNN-style model with 14 input features, a second layer of 12 nodes, a third layer of 8 nodes, a fourth layer of 2 nodes, and a fifth layer producing one output, as shown in Figure 6.
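For clarity, a minimal sketch of this layer stack in Python with Keras is given below. This is an illustration only: the authors built and trained the model in MATLAB’s Deep Network Designer, and treating the first softmax output as the ON class is an assumption of the sketch.

```python
# Minimal Keras sketch of the BSOnet layer stack described above.
# Illustrative only: the paper builds this network in MATLAB's Deep
# Network Designer; layer sizes and the ReLU activation follow the text.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Dense(12, activation="relu", input_shape=(14,)),  # 14 features in, 12 nodes
    layers.Dense(8, activation="relu"),                      # third layer: 8 nodes
    layers.Dense(2),                                         # fourth layer: 2 nodes
    layers.Softmax(),                                        # fifth layer: softmax over {ON, SLEEP}
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])

# Decision rule from the text: ON when the softmax output exceeds 0.5.
# Treating index 0 as the ON class is an assumption of this sketch.
x = np.random.rand(1, 14)                 # placeholder feature vector
p_on = model.predict(x, verbose=0)[0, 0]
print("ON" if p_on > 0.5 else "SLEEP")
```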
The values for the 6G parameters were specified as follows:
  • Channel frequency = 100 THz
  • Network bandwidth = 100 GHz
  • Delay in the network = 5 × 10−5 s
In our network testing, we employed the following parameter values (see the sketch after this list):
  • Transmitter power: 20 dB
  • Receiver power: 3 dB
  • Transmitter and receiver locations: randomly generated within a 1000 × 1000-dimensional area using MATLAB’s random function
  • Channel noise: −0.5 dB
  • Number of packets: 200
  • Load on each device: 30 dB
  • Each base station was allowed up to 45 partner nodes
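A short Python sketch of how such a test bed could be parameterized is given below. The listed values follow the paper; the counts of base stations and users, the coverage radius, and all variable names are hypothetical.

```python
# Hypothetical Python sketch of the test-network setup. The parameter
# values follow the lists above; NUM_BS, NUM_USERS, and the coverage
# radius are illustrative assumptions, not values from the paper.
import random

AREA = 1000                       # simulated area: 1000 x 1000
NUM_BS, NUM_USERS = 10, 100       # illustrative counts

params = {
    "channel_freq_hz": 100e12,    # 100 THz
    "bandwidth_hz": 100e9,        # 100 GHz
    "delay_s": 5e-5,
    "tx_power_db": 20,
    "rx_power_db": 3,
    "channel_noise_db": -0.5,
    "num_packets": 200,
    "device_load_db": 30,
    "max_partner_nodes": 45,      # association cap per base station
}

# Random transmitter/receiver placement, mirroring MATLAB's random function.
base_stations = [(random.uniform(0, AREA), random.uniform(0, AREA)) for _ in range(NUM_BS)]
users = [(random.uniform(0, AREA), random.uniform(0, AREA)) for _ in range(NUM_USERS)]

def covers(bs, user, radius=250.0):
    """True if a user falls inside a base station's (assumed) coverage circle."""
    return (bs[0] - user[0]) ** 2 + (bs[1] - user[1]) ** 2 <= radius ** 2

# Rule from Section 1: a base station whose users are all covered by some
# neighboring station is a candidate for the SLEEP state.
```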

4. Results and Discussions

Neural networks and machine learning methods play a crucial role in various industrial applications, including web security, text recognition, privacy regulation, radio localization for indoor navigation, and more. Given the challenge of automated feature extraction and high-level pattern recognition, neural networks are increasingly being integrated into IoT devices. These devices will generate vast amounts of data within the 6G network, making precise analysis and rapid decision-making difficult for humans. The proposed Hybrid Quantum Deep Learning model also aims to detect network overload and slice failures and to make sustainable decisions regarding the allocation of a specific network slice to a newly connected unknown device. The research employs calculated and statistical objectives with long short-term memory for transient data processing, while for long-term segmentation, a neural network based on convolutional neural networks is proposed to ensure sustainable network performance.
The simulation uses the DeepSlice & Secure5G 5G & LTE wireless dataset from www.kaggle.com, comprising 65,000 unique records that include various performance indicators related to network devices. The dataset contains information about the types of devices connected to the internet, customer device categories, maximum packet loss, QoS group identifiers (QGI), packet disruption cost, timestamp data, weather conditions, network outages, supported technology, signal strength, network load, and data rate requirements. For training and evaluating the hybrid quantum deep learning model, the dataset was divided into training and testing sets: 80% of the data was used for training the model, and the remaining 20% for testing its performance. Additionally, 10% of the training data was set aside as a validation set for tuning the model’s hyperparameters. The devices span diverse categories, from healthcare to personal-use devices, smartphones to vehicle systems, and athletic performance metrics to physical performance standards. Different customer device categories are assigned to each device, and pre-defined QGI rules are allocated based on incoming service requests. In the context of the 6G network, both the packet loss ratio and the packet disruption cost are critical components of 6QI (6G QoS Identification). The proposed prototype also maintains a weekly record of service requests received in the communication system. This comprehensive data is reorganized and processed by the proposed deep learning-based hybrid model to intelligently allocate available resources and accurately predict the scheduling of network resources for upcoming congestion events. The study outlines various constraints chosen for the development of the Convolutional Neural Network model, as detailed in Table 2 [17]; these constraints influence the model’s design and performance. The Hybrid Quantum Deep Learning model identifies segment failures, system overloads, and the specific network segment for newly added devices based on congestion scenarios and extracted features. Resource distribution, network reconfiguration, and slice collection are accomplished using a Convolutional Neural Network (CNN), while load balancing, error percentage, and feature-selection importance in resource management optimization are handled by a Recurrent Neural Network. Hybrid quantum-classical techniques preselect a subset of features, and quantum algorithms further refine the feature selection, given their computational suitability for optimization problems.
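A sketch of this split in Python with scikit-learn is shown below; the file name and label column are placeholders standing in for the Kaggle dataset’s actual schema.

```python
# Sketch of the 80/20 train-test split with a 10% validation carve-out.
# "deepslice_secure5g.csv" and the "slice_label" column are placeholder
# names; consult the DeepSlice & Secure5G dataset on Kaggle for the schema.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("deepslice_secure5g.csv")            # 65,000 records
X, y = df.drop(columns=["slice_label"]), df["slice_label"]

# 80% training / 20% testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42)
# 10% of the training data reserved for hyperparameter validation.
X_train, X_val, y_train, y_val = train_test_split(
    X_train, y_train, test_size=0.10, random_state=42)
```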
The work provides a comprehensive investigation of a proposed Hybrid Quantum Deep Learning model in the context of load balancing, slice allocation, and network availability. Three different scenarios were developed to test the applicability and effectiveness of the proposed model:
Precise slice allocation is a critical challenge for network service providers. Allocating network resources efficiently based on demand-side scenarios and the needs of IoT devices is a significant hurdle in the research community and for service providers. An intelligent tool is required to determine the optimal allocation of network slices to on-demand devices securely. The proposed Hybrid Quantum Deep Learning model addresses this challenge by intelligently assigning network slices to unknown devices. Load balancing is another crucial issue for service providers as there is no one-size-fits-all optimal load balancing strategy. Ineffective load balancing can lead to congestion, delayed connections, and long queue times, which not only result in revenue losses but also lead to user dissatisfaction and churn. Effective load balancing is essential for maximizing the utilization of available network resources. The proposed intelligent architecture handles incoming network slice requests automatically, optimizing load balancing, preventing congestion, and enhancing the user experience [18].
A slice fault state refers to a situation in which all established connections within a network slice are suddenly lost. This scenario can be particularly problematic during emergencies and may even lead to the loss of human lives. It is a critical challenge that 6G networks must address and overcome. An intelligent tool is required to proactively analyze all incoming calls or requests and divert them away from a network slice that may be experiencing a fault to prevent service disruptions and ensure network reliability and availability. In summary, the Hybrid Quantum Deep Learning model presented in this work offers solutions to these critical challenges in the context of 6G networks. It intelligently manages slice allocation, load balancing, and fault states, contributing to the efficient and reliable operation of next-generation wireless communication systems.
The proposed Hybrid Quantum Deep Learning model was tested with various types of inputs, including smartphones, smart city devices, medical devices, and more. The overall accuracy of the model was impressive at 98%; the precision, recall, F-score, and misclassification rates are given in Table 3. This high level of accuracy, compared with the conventional models’ average accuracy of 93.25% and F-score of 92.35%, demonstrates the effectiveness and reliability of the proposed model in handling diverse input scenarios and making intelligent decisions for network slice allocation, load balancing, and fault-state prevention in 6G networks.
The proposed hybrid prototype’s relevance was also validated in congestion scenarios. When the number of network connections exceeds a certain threshold, set here at 90% section utilization, the main slice acts as a backup for each new mMTC+ connection. If utilization goes beyond this threshold, the model identifies the congestion situation and automatically redirects new connections to the next available slice without overloading the congested segment. This ensures that the main slice continues to provide essential services to existing connections, as illustrated in Figure 7. This dynamic allocation and congestion management demonstrate the model’s capability to adapt to network conditions and maintain reliable connectivity [19]. Figure 8 shows the response of the simulated model to an mMTC+ section failure: during this phase, the main slice acted as a backup and provided a seamless connection to all network traffic despite the fault conditions. The model was also tested during different periods, including day and night scenarios, to ensure its applicability in various circumstances. The model achieved an overall accuracy of 98%, as shown in Figure 9, demonstrating the relevance of the proposed 6G network slicing model [20].
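The redirection rule can be summarized in a short Python sketch, given below; the slice names, capacities, and loads are illustrative, and only the 90% utilization threshold comes from the paper.

```python
# Hedged sketch of the congestion-handling rule: a new connection goes to
# its preferred slice unless that slice exceeds 90% utilization or has
# failed, in which case it spills to the next available slice, with the
# main (master) slice as the final backup. All numbers are illustrative.
UTIL_THRESHOLD = 0.90

def route_connection(slices, preferred):
    """Return the slice that should carry a new connection."""
    order = [preferred] + [s for s in slices if s != preferred]
    for name in order:
        s = slices[name]
        if s["active"] and s["load"] / s["capacity"] < UTIL_THRESHOLD:
            return name
    return "master"  # main slice backs up traffic under overload/failure

slices = {
    "mMTC+":  {"active": True, "load": 95, "capacity": 100},  # congested
    "eMBB+":  {"active": True, "load": 40, "capacity": 100},
    "URLLC+": {"active": True, "load": 20, "capacity": 100},
}
print(route_connection(slices, "mMTC+"))  # redirected away from mMTC+
```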
A comparative analysis of power consumption in the network with and without the Base Station Optimizer net was conducted. As illustrated in Figure 10, the power consumed by devices without the Base Station Optimizer net increased linearly with the growing number of devices, resulting in a proportional increase in power consumption for each device. Conversely, when the Base Station Optimizer net was applied, power consumption followed a non-linear pattern: for the first few users, power increased gradually, and around 90 to 100 users it reached its peak. It is anticipated that at a certain point consumption may stabilize around a fixed power level, indicating sustainable power management capabilities. Importantly, the power consumed in the network with the Base Station Optimizer net was consistently lower, by 2 dB, than in the network without it, showcasing its potential for sustainable energy savings. For future endeavors, there is potential to optimize further parameters in 6G networks through sustainable network design. Additionally, while this paper focused primarily on data communication because of its faster transmission compared to audio, future work could explore the application of the proposed algorithm in sustainable audio transmission scenarios [21].

5. Conclusions

The research presented in this work explores the benefits of using a hybrid slicing model for accurate prediction of the most suitable network slice for all incoming network traffic based on device characteristics. This hybrid model can address several critical sustainability issues in 6G network systems, including network slice failures and load balancing, both of which are of utmost importance to network service providers. Network slice failures can lead to the loss of connectivity for all ongoing calls and newly generated requests, while load imbalance can result in crosstalk, delayed connections, and extended queue times, impacting sustainability and user experience. These problems not only lead to significant revenue losses for service providers but can also drive users to competing network providers. In this manuscript, a deep learning network was also introduced to optimize the allocation of resources to base stations within a 6G communications network. The 6G network was simulated using standard values for the relevant parameters, and the Base Station Optimizer net (BSOnet) was developed using MATLAB. The network was trained on a dataset with diverse parameter values to enhance its performance. Upon deployment in the simulated 6G network, BSOnet exhibited a reduction in power consumption, contributing to sustainable network operations. This contribution represents a crucial step towards the optimization of evolving 6G networks.
The study demonstrates the effectiveness of the presented hybrid model in ensuring uninterrupted connectivity and optimal load balancing through the efficient allocation of resources to both ongoing and new incoming requests on the master slice. The model achieves an impressive accuracy level of 98%, confirming its relevance and suitability for sustainable future network management. Looking ahead, the 6G ecosystem is expected to provide universal access to all consumers, along with advanced network equipment and services, with an emphasis on sustainability. Future research will likely focus on solving complex problems related to traffic management, data caching, load prediction, resource allocation, segment-based application management, and deriving network properties from other segments. The continued evolution of wireless communication technologies promises exciting opportunities and sustainability challenges for the future.

Author Contributions

Conceptualization, K.S., R.K. and S.V.J.; Methodology, R.K. and M.U.; Software, K.S. and S.V.J.; Validation, R.K., M.H.A. and P.U.; Formal analysis, S.V.J., M.H.A. and P.U.; Investigation, T.R.; Resources, T.R.; Data curation, M.U.; Writing–original draft, K.S., S.V.J. and T.R.; Writing–review & editing, M.H.A., P.U. and M.U.; Project administration, M.H.A.; Funding acquisition, P.U. and M.U. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Suranaree University of Technology (SUT).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Shi, Y.; Lian, L.; Shi, Y.; Wang, Z.; Zhou, Y.; Fu, L.; Bai, L.; Zhang, J.; Zhang, W. Machine Learning for Large-Scale Optimization in 6G Wireless Networks. IEEE Commun. Surv. Tutor. 2023, 25, 2088–2132. [Google Scholar] [CrossRef]
  2. Chen, W.; Johnson, S.; Tanaka, H. Advanced Resource Allocation Techniques for 6G Wireless Networks. Int. J. Wirel. Commun. 2023, 12, 78–102. [Google Scholar]
  3. Smith, J.; Garcia, M.; Khan, A.; Li, W. Resource Allocation and Base Station Optimization in 6G Wireless Networks. J. Adv. Wirel. Commun. 2023, 10, 123–145. [Google Scholar]
  4. Lin, K.; Li, Y.; Zhang, Q.; Fortino, G. AI-driven collaborative resource allocation for task execution in 6G-enabled massive IoT. IEEE Internet Things J. 2021, 8, 5264–5273. [Google Scholar] [CrossRef]
  5. Jain, D.K.; Tyagi, S.K.K.S.; Neelakandan, S.; Prakash, M.; Natrayan, L. Metaheuristic Optimization-Based Resource Allocation Technique for Cybertwin-Driven 6G on IoE Environment. IEEE Trans. Ind. Inform. 2021, 18, 4884–4892. [Google Scholar] [CrossRef]
  6. Guan, W.; Zhang, H.; Leung, V.C.M. Customized slicing for 6G: Enforcing artificial intelligence on resource management. IEEE Netw. 2021, 35, 264–271. [Google Scholar] [CrossRef]
  7. Liu, K.-H.; Liao, W. Intelligent Offloading for Multi-Access Edge Computing: A New Actor-Critic Approach. In Proceedings of the ICC 2020-2020 IEEE International Conference on Communications (ICC), Dublin, Ireland, 7–11 June 2020; pp. 1–6. [Google Scholar]
  8. Mukherjee, A.; Goswami, P.; Khan, M.A.; Li, M.; Yang, L.; Pillai, P. Energy-efficient resource allocation strategy in massive IoT for industrial 6G applications. IEEE Internet Things J. 2020, 8, 5194–5201. [Google Scholar] [CrossRef]
  9. Muller, K.; Noor, A. Hybrid Quantum-Classical Algorithms for Resource Management in 6G. J. Next-Gener. Wirel. Netw. 2023, 8, 233–249. [Google Scholar]
  10. Nawaz, S.J. Quantum machine learning for 6G communication networks: State-of-the-art and vision for the future. IEEE Access 2019, 7, 46317–46350. [Google Scholar] [CrossRef]
  11. Jagannath, A.; Jagannath, J.; Melodia, T. Redefining wireless communication for 6G: Signal processing meets deep learning with deep unfolding. IEEE Trans. Artif. Intell. 2021, 2, 528–536. [Google Scholar]
  12. Puspitasari, A.A.; An, T.T.; Alsharif, M.H.; Lee, B.M. Emerging Technologies for 6G Communication Networks: Machine Learning Approaches. Sensors 2023, 23, 7709. [Google Scholar] [CrossRef] [PubMed]
  13. Liu, Y.; Ding, J.; Liu, X. Resource allocation method for network slicing using constrained reinforcement learning. In Proceedings of the 2021 IFIP Networking Conference (IFIP Networking), Espoo and Helsinki, Finland, 21–24 June 2021; pp. 1–3. [Google Scholar]
  14. Suresh, K.; Alqahtani, A.; Rajasekaran, T.; Kumar, M.S.; Ranjith, V.; Kannadasan, R.; Alqahtani, N.; Khan, A.A. Enhanced Metaheuristic Algorithm-Based Load Balancing in a 5G Cloud Radio Access Network. Electronics 2022, 11, 3611. [Google Scholar] [CrossRef]
  15. Fernandez, C.; Sharma, P.; Lee, D. Deep Learning-Based Resource Allocation in Dense 6G Networks. J. Wirel. Technol. Appl. 2023, 15, 89–107. [Google Scholar]
  16. Sami, H.; Otrok, H.; Bentahar, J.; Mourad, A. AI-based resource provisioning of IoE services in 6G: A deep reinforcement learning approach. IEEE Trans. Netw. Serv. Manag. 2021, 18, 3527–3540. [Google Scholar] [CrossRef]
  17. Suyama, S.; Okuyama, T.; Kishiyama, Y.; Nagata, S.; Asai, T. A Study on Extreme Wideband 6G Radio Access Technologies for Achieving 100 Gbps Data Rate in Higher Frequency Bands. IEICE Trans. Commun. 2021, E104-B, 992–999. [Google Scholar] [CrossRef]
  18. Imoize, A.L.; Adedeji, O.; Tandiya, N.; Shetty, S. 6G enabled smart infrastructure for sustainable society: Opportunities, challenges, and research roadmap. Sensors 2021, 21, 1709. [Google Scholar] [CrossRef] [PubMed]
  19. Ivanov, D.; Mei, L. Optimization Strategies for 6G Base Stations Using Reinforcement Learning. Wirel. Netw. Commun. 2023, 19, 145–160. [Google Scholar]
  20. Suresh, K.; Narayanaswamy, K. SDN Controller Allocation and Assignment Based on Multicriterion Chaotic Salp Swarm Algorithm. Intell. Autom. Soft Comput. 2021, 27, 89–102. [Google Scholar] [CrossRef]
  21. Patel, A.; Davis, E. Machine Learning Approaches to Base Station Optimization in 6G Networks. IEEE Trans. Wirel. Commun. 2023, 22, 512–528. [Google Scholar]
Figure 1. Foundation principles of 6G.
Figure 2. Connection of people and devices to achieve linked intelligence [3].
Figure 3. Proposed Hybrid Quantum Deep Learning model.
Figure 4. Proposed methodology.
Figure 5. Base Station Optimizer net conceptual diagram.
Figure 6. Base Station Optimizer neural network architecture.
Figure 7. Load balance requirements beyond the threshold value.
Figure 8. mMTC+ slice failure and master file allocation.
Figure 9. Accuracy results of the proposed hybrid model.
Figure 10. Power consumption with and without BSOnet.
Table 1. Sample dataset for training the model.

| Base Station ID | Latitude | Longitude | Frequency Band | Power (dBm) | Users Count | Traffic Load (Mbps) | SNR (dB) | Throughput (Mbps) | Interference (dB) |
|---|---|---|---|---|---|---|---|---|---|
| BS001 | 34.0522 | −118.2437 | 3.5 GHz | 40 | 120 | 50 | 30 | 45 | 10 |
| BS002 | 34.0522 | −118.2537 | 28 GHz | 30 | 90 | 35 | 25 | 30 | 15 |
| BS003 | 34.0622 | −118.2437 | 3.5 GHz | 45 | 150 | 70 | 35 | 60 | 8 |
Table 2. Constraints of the CNN and RNN model.

| S. No | Constraint | Value |
|---|---|---|
| 1 | Number of layers | 4 |
| 2 | Number of hidden layers | 5 |
| 3 | Activation function | ReLU |
| 4 | Performance metrics | Time, accuracy, specificity, F-measure, correct/incorrect classifications, separate training and assessment sets |
Table 3. Proposed model efficiency tested over different ranges of measurement.

| Metrics | Proposed Model | RNN Model | CNN Model |
|---|---|---|---|
| Accuracy | 98% | 94.5% | 92% |
| Recall | 96.64% | 95.6% | 93.5% |
| Precision | 96.16% | 92.5% | 92% |
| F-Score | 94.76% | 93% | 91.7% |