Article

Supporting Immersive Video Streaming via V2X Communication

1 Department of Computer Science & Information Engineering, National Dong Hwa University, Shoufeng, Hualien 974301, Taiwan
2 Lookout, Inc., Taipei 110207, Taiwan
* Author to whom correspondence should be addressed.
Electronics 2024, 13(14), 2796; https://doi.org/10.3390/electronics13142796
Submission received: 24 June 2024 / Revised: 11 July 2024 / Accepted: 14 July 2024 / Published: 16 July 2024
(This article belongs to the Special Issue Autonomous Vehicles Technological Trends, 2nd Edition)

Abstract: With the rapid advancement of autonomous driving and network technologies, future vehicles will function as network nodes, facilitating information transmission. Concurrently, in-vehicle entertainment systems will undergo substantial enhancements. Beyond traditional broadcasting and video playback, future systems will integrate immersive applications featuring 360-degree views and six degrees of freedom (6DoF) capabilities. As autonomous driving technology matures, vehicle passengers will be able to engage in a broader range of entertainment activities while on the move. However, this evolution in video applications will significantly increase bandwidth demand for vehicular networks, potentially leading to bandwidth shortages in congested traffic areas. This paper presents a method for bandwidth allocation for vehicle video applications within the landscape of vehicle-to-everything (V2X) networks. By utilizing millimeter-wave (mmWave) and terahertz (THz) frequency bands together with cell-free (CF) extremely large-scale multiple-input multiple-output (XL-MIMO) wireless communication technologies, we provide vehicle passengers with the necessary bandwidth resources. Additionally, we address bandwidth contention issues in congested road segments by incorporating communication methods tailored to the characteristics of vehicular environments. By classifying users and adjusting according to the unique requirements of various multimedia applications, we ensure that real-time applications receive adequate bandwidth. Simulation experiments validate the proposed method’s effectiveness in managing bandwidth allocation for in-vehicle video applications within V2X networks. It increases the available bandwidth during peak hours by 32%, demonstrating its ability to reduce network congestion and ensure smooth playback of various video application types.

1. Introduction

Recent literature indicates that numerous studies have proposed improvements in compression efficiency and bitrate reduction for multimedia applications and explored the potential of emerging network technologies for mobile networks. As autonomous driving technology matures, future vehicles will not only possess autonomous driving capabilities but also offer more diverse onboard entertainment systems. Passengers will be able to participate in immersive remote meetings and experiments, as well as enjoy immersive games and high-quality streaming media services. These emerging multimedia applications will demand substantial bandwidth and extremely high throughput within vehicular networks.
To address the streaming requirements in vehicular networks, Feng et al. [1] examined caching strategies for the dynamic characteristics of vehicular networks and proposed an adaptive dynamic resource management scheme for hierarchical cooperative caching networks. Shin et al. [2] utilized a 3D vector mobility prediction algorithm in their protocol to predict vehicle mobility, enabling flexible responses to dynamic topology changes and achieving high frame transmission rates. Alaya et al. [3] introduced an inter-layer approach for multimedia streaming in vehicular networks, dynamically adjusting transmission rates to enhance video playback stability and packet transmission rates in vehicles. However, these studies have not fully considered the advantages of future 6G network technologies in vehicle communications and the impact of bandwidth demand differences for various multimedia applications.
Recently, the popularity of smart devices and communication technologies has driven rapid growth in immersive games and high-quality streaming media services. These emerging multimedia applications demand substantial bandwidth and extremely high network throughput [4,5]. Video quality of experience (QoE) has a significant impact on the likelihood of a user making a repeat purchase; conversely, poor video QoE can result in a loss of users. Consequently, it is of paramount importance for wireless network operators to keep the user-perceived QoE as high as possible.
To meet these future potential demands, the Moving Picture Experts Group (MPEG) has established various video coding standards. Among these, Advanced Video Coding (AVC) stands out for its low encoding complexity and broad compatibility, making it the most widely used video codec standard currently [6]. As the next-generation video coding standard, Versatile Video Coding (VVC) offers higher compression efficiency compared to AVC, reducing data size by 75% while maintaining the same level of visual quality [7]. MPEG has also introduced Low Complexity Enhancement Video Coding (LCEVC), which enhances basic codecs with an additional low bitrate layer to improve video compression and transmission efficiency. LCEVC reduces the resolution of the input original video by half to form a low-quality video base layer, then encodes the differences between the low-quality video and the original video to generate an enhancement layer. Upon decoding at the client end, the LCEVC decoder combines the base layer and enhancement layer to produce video with the same resolution as the original, achieving bitrate reduction [8].
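The base/enhancement layering described above can be illustrated with a toy one-dimensional model (this is a conceptual sketch, not the LCEVC reference codec; the function names and the averaging/nearest-neighbour filters are our own simplifications):

```python
# Toy sketch of LCEVC-style layering: the base layer is the input downscaled
# by half, and the enhancement layer holds the residual between the original
# and the upscaled base layer, so the client can reconstruct full resolution.

def downscale(frame):
    """Halve resolution by averaging adjacent sample pairs."""
    return [(frame[i] + frame[i + 1]) // 2 for i in range(0, len(frame) - 1, 2)]

def upscale(base):
    """Nearest-neighbour upsampling back to the original length."""
    out = []
    for s in base:
        out.extend([s, s])
    return out

def make_layers(frame):
    base = downscale(frame)
    enhancement = [o - p for o, p in zip(frame, upscale(base))]
    return base, enhancement

def reconstruct(base, enhancement):
    return [p + e for p, e in zip(upscale(base), enhancement)]

frame = [10, 12, 40, 44, 90, 96, 20, 22]
base, enh = make_layers(frame)
assert reconstruct(base, enh) == frame   # base + residual restores the input
```

Because the enhancement layer carries only small residuals, it compresses cheaply, which is the intuition behind LCEVC's bitrate reduction.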
A key component of immersive applications is 360-degree video, which demands higher network bandwidth. Therefore, many researchers have studied tile-based strategies for segmenting 360-degree video into multiple tiles through the Field of View (FoV), encoding the tiles within the FoV at high quality, and encoding or discarding the remaining tiles at low quality to reduce bitrate during encoding [9].
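The tile-selection idea can be sketched as follows for the horizontal dimension of an equirectangular frame (the tile count, FoV width, and function names are illustrative assumptions, not values from any cited scheme):

```python
# Sketch of FoV-driven tile selection for 360-degree video: tiles whose
# centers fall within the viewer's horizontal FoV get high quality, the
# rest are encoded at low quality to save bitrate.

def tiles_in_fov(yaw_deg, fov_deg=90, n_cols=8):
    """Return column indices of tiles overlapping the horizontal FoV."""
    tile_width = 360 / n_cols
    selected = set()
    for c in range(n_cols):
        center = (c + 0.5) * tile_width
        # angular distance from tile center to gaze direction, wrapped to [-180, 180)
        diff = (center - yaw_deg + 180) % 360 - 180
        if abs(diff) <= fov_deg / 2 + tile_width / 2:
            selected.add(c)
    return selected

def plan_quality(yaw_deg, n_cols=8):
    fov = tiles_in_fov(yaw_deg, n_cols=n_cols)
    return {c: ("high" if c in fov else "low") for c in range(n_cols)}

# Looking straight ahead (yaw 0) selects the tiles around the wrap-around seam.
print(plan_quality(0))
```

A real system would extend this to two dimensions and re-run the selection as head-motion predictions arrive.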
6DoF video, also known as volumetric video, allows for translation in horizontal, vertical, and depth directions in addition to the orientation provided by 3DoF, offering a more immersive experience for users. There are two formats for implementing 6DoF video: Multiview Video plus Depth (MVD) [10] and Point Cloud (PtCl) [11]. The MVD format encodes each view captured by the cameras individually before synthesizing them. This process can lead to significant bitrate wastage due to the large amount of redundant content between views. To address this, the MPEG Immersive Video (MIV) standard has been developed by MPEG to automatically select reference views, ensuring optimal quality rendering and significantly reducing data bitrate [12]. Additionally, the standard compresses all trimmed views into atlases, which substantially decreases the number of required encoders and decoders [13]. As a result, any conventional 2D video encoder can encode these atlases [14]. MIV operates in two modes: MIV View mode and MIV Atlas mode. MIV View mode selects more base views to preserve scene details, making it ideal for applications that demand high video quality. In contrast, MIV Atlas mode trims views to include only specific ones, thereby reducing data volume and improving transmission efficiency [15,16].
The other 6DoF format is PtCl video. A typical dynamic PtCl for entertainment purposes usually contains about 1 million points per frame, with an uncompressed bandwidth requirement of 3.6 Gbps for 30 frames per second [17]. The number and density of objects in a PtCl scene also affect the bandwidth requirements for PtCl video. PtCl can be projected onto a 2D plane and encoded by traditional video codecs like AVC or VVC, such as in Video-based PtCl Compression (V-PCC) [18,19]. This approach results in a bitstream rate much lower than 3D-based methods. Geometric 3D compression methods, such as Geometry-based PtCl Compression (G-PCC) [20], directly compress PtCl using the characteristics of octree structures. These methods offer high encoding speed but lower compression efficiency, making them suitable for PtCl live video [21]. Shi et al. [22] simplified PtCl by downsampling and resizing during the encoding phase, reconstructing them on the user side upon transmission. Han et al. [23] conditionally transmitted PtCl based on the FoV, either sending the PtCl around the FoV or adjusting the PtCl density, effectively reducing bandwidth requirements. Li et al. [24] divided PtCl into multiple volumetric tiles similar to 360-degree video, implementing transmission allocation schemes for these tiles.
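The 3.6 Gbps figure quoted above can be verified with a back-of-the-envelope calculation, assuming a common per-point layout of three 32-bit coordinates plus 8-bit RGB attributes (15 bytes per point):

```python
# Uncompressed bandwidth of a typical entertainment point cloud:
# 1 million points per frame at 30 frames per second.

points_per_frame = 1_000_000
fps = 30
bytes_per_point = 3 * 4 + 3            # xyz as float32 + 8-bit RGB = 15 bytes

bits_per_second = points_per_frame * fps * bytes_per_point * 8
gbps = bits_per_second / 1e9
print(f"{gbps:.1f} Gbps")              # 3.6 Gbps, matching the figure in the text
```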
With the ongoing advancement of Intelligent Transportation Systems (ITS) [25], an increasing number of autonomous vehicles are replacing today’s manually driven vehicles. These autonomous vehicles connect with each other and with surrounding roadside units (RSUs), forming vehicular networks, and engage in vehicle-to-infrastructure (V2I) and vehicle-to-vehicle (V2V) communications [26], collectively known as Vehicle-to-Everything (V2X). These extensive connectivity demands, along with in-vehicle multimedia entertainment services, present significant challenges to current wireless network communications.
In recent years, academia and industry have been increasingly focusing on sixth-generation (6G) systems to meet the future information and communications technology demands anticipated by 2030 [27,28]. Light Fidelity (LiFi), based on visible light communications (VLC), is expected to become an indispensable part of 6G systems [29]. With the rising number of mobile devices and the increasing demand for data-intensive applications and multimedia materials (e.g., audio, online gaming, images, video streaming, etc.), LiFi technology is suggested as a potential solution to the spectrum scarcity issue [30]. LiFi operates by controlling light-emitting diodes (LEDs) to generate communication signals. Its transmission is highly efficient due to the unregulated and large bandwidth of visible light. Furthermore, thanks to the use of the visible light band, LiFi is virtually immune to interference from other devices and ensures secure data transmission [31]. Jamuna et al. [32] introduced a design and prototype of a device that utilizes LiFi technology for vehicular communication, enabling secure, efficient, environmentally friendly, and high-speed data transfer. Saravanan et al. [33] examined LiFi technology, which utilizes light-emitting diodes as the transmission module, exploring its characteristics, communication methods, and key system technologies for high-speed data transmission, and subsequently applying these in V2V communication.
Additionally, 6G will incorporate a range of emerging communication technologies, such as millimeter-wave (mmWave) and Terahertz (THz) wireless communication. These technologies benefit from their short wavelengths, allowing for significant spatial reuse and enabling higher bandwidth and transmission rates. Researchers like Su et al. [34] addressed content distribution scheduling issues in mmWave vehicular networks, while Lin et al. [35] focused on low-complexity vehicle tracking and resource allocation methods for THz V2I communications in vehicular networks. Lin et al. [36] studied the joint caching and scheduling strategy in 6G vehicular networks, enhancing request hit probability and the number of concurrent transmissions.
However, in traditional cellular networks, signal strength and inter-cell interference severely limit the performance at cell edges, which has always been a major bottleneck for any mobile network. Recently, cell-free (CF) systems have attracted increasing research attention and demonstrated their strong potential in mobile communications. CF systems deploy a considerable number of geographically distributed antenna access points. These access points are connected to a wireless access point control unit via high-speed optical fiber or wireless backhaul/fronthaul links. The wireless access point control unit operates all antenna access points as a whole without cell boundaries, using spatial multiplexing technology to serve all users on the same time–frequency resources. This technology allows vehicles to continuously enjoy network services while moving, avoiding the additional overhead of frequent handovers [37]. Zhao et al. [38] studied the scheduling of communication and computing resources in the mmWave cell-free downlink transmission architecture of urban vehicular networks, improving the performance of low-latency data transmission. As a natural evolution of contemporary large-scale multiple-input multiple-output (MIMO) technology, Extremely Large-Scale MIMO (XL-MIMO) further increases the number of antennas by at least an order of magnitude, thereby unprecedentedly improving the spectral efficiency and spatial resolution of wireless communication and sensing. Consequently, XL-MIMO is considered a key technology for 6G to meet several stringent KPIs, such as peak data rates, spectral efficiency, reliability, and precision in positioning and sensing [39]. Liu et al. [40] studied CF XL-MIMO systems with base stations equipped with XL-MIMO panels in dynamic environments, providing dynamic strategies to address high-dimensional signal processing challenges.
As autonomous driving technology advances, passengers in vehicles will have the opportunity to engage in a broader range of entertainment activities while on the move. This evolution in video applications is expected to significantly increase the demand for bandwidth within vehicular networks, potentially leading to bandwidth shortages, especially in congested traffic areas. To address this challenge, this paper presents a method for bandwidth allocation for vehicle video applications within the framework of V2X networks. The main contributions of this study are as follows:
  • This study employs THz and mmWave frequency band technologies, while also introducing CF XL-MIMO systems to further enhance system coverage, providing efficient and stable bandwidth for constructing a comprehensive vehicular network environment for vehicle passengers.
  • This study adopts different encoding technologies for various types of multimedia applications and dynamically adjusts video quality based on different network conditions to optimize video transmission efficiency.
  • In planning bandwidth allocation, the characteristics of applications and user levels are considered, with bandwidth being prioritized for real-time applications. When similar applications compete for bandwidth, premium users are given priority access. Additionally, when network bandwidth is insufficient, V2V communication technology can be utilized to access bandwidth from non-busy road sections, thereby effectively improving the overall utilization of bandwidth resources.

2. Bandwidth Allocation for Diverse Streaming Services in V2X Communication

In this study, each RSU on the road segment coordinates the allocation of bandwidth. We utilize mmWave and THz bands to provide bandwidth for vehicular multimedia applications. Furthermore, our architecture incorporates a CF XL-MIMO system to improve communication reliability and transmission rates, as depicted in Figure 1.
We consider various types of videos, including traditional 2D video, 360-degree video, and 6DoF video. Both real-time and non-real-time application scenarios are addressed, such as live video and video on demand (VoD). To cater to diverse user needs, we classify users into premium and regular categories. Premium users, through a fee-based model, receive enhanced video quality, featuring higher resolution, lower latency, and a more immersive viewing experience. Regular users have access to standard quality basic services.
Advanced encoding technologies are employed to optimize video quality and transmission efficiency across different types of video content and application scenarios. For 2D and 360-degree videos, VVC encoding technology is used for non-real-time applications, while AVC encoding technology is utilized for real-time applications, ensuring improved transmission efficiency while maintaining high video quality.
Emerging 6DoF video content is supported in two formats: PtCl and MVD. PtCl VoD utilizes the V-PCC encoding standard with higher compression rates to reduce bandwidth requirements. Premium users benefit from the superior quality All-Intra (AI) encoding mode, while regular users utilize the default Random Access (RA) mode. For PtCl live video, the faster G-PCC encoding standard is employed.
For MVD video format, we employ the MIV encoding standard to effectively manage multi-view and depth information. Premium users benefit from the higher-quality MIV View mode, which includes a 120-degree Field of View (FoV) to prevent quality fluctuations or stuttering in the FoV area caused by sudden head movements.
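The codec-selection policy of the preceding three paragraphs can be condensed into a small decision table (a sketch in our own notation; the assignment of MIV Atlas mode to regular MVD users is our reading of the mode descriptions in Section 1, and the function and key names are assumptions):

```python
# Condensed sketch of the codec/mode selection described in the text,
# keyed on video type, real-time vs. VoD, and user tier.

def select_codec(video_type, realtime, premium):
    if video_type in ("2d", "360"):
        return "AVC" if realtime else "VVC"
    if video_type == "ptcl":                        # 6DoF point cloud
        if realtime:
            return "G-PCC"                          # fast encoding for live video
        return "V-PCC (All-Intra)" if premium else "V-PCC (Random Access)"
    if video_type == "mvd":                         # 6DoF multiview + depth
        return "MIV View mode" if premium else "MIV Atlas mode"
    raise ValueError(f"unknown video type: {video_type}")

assert select_codec("360", realtime=False, premium=False) == "VVC"
assert select_codec("ptcl", realtime=True, premium=True) == "G-PCC"
assert select_codec("mvd", realtime=False, premium=False) == "MIV Atlas mode"
```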
Additionally, LCEVC is implemented to reduce video complexity and required bandwidth. This technology separates the basic layer and enhancement layer, allowing RSUs to allocate bandwidth resources more flexibly. This ensures that vehicle passengers can access at least the basic layer during network congestion, thereby maintaining a satisfactory viewing experience.
Figure 2 illustrates the architecture of the bandwidth allocation algorithm. The ellipsis on the right side of the image indicates that all RSUs and vehicles contain the same corresponding modules as depicted in the diagram. Before departure, the app installed in the vehicle activates the “Route Planning and Adjustment Module for Vehicles”. This module plans the route based on the passenger’s destination and real-time traffic conditions, estimating time periods for each road segment and communicating this information to the RSUs located along the route.
When a passenger in a vehicle activates a multimedia application while on the road, the “Bandwidth Calculation Module for Vehicle Video Applications” is initiated by the app on the passenger’s vehicle. This module retrieves application requirements and specifications from the application provider’s server and sends the bandwidth demand to the RSUs along the vehicle’s route.
Upon receiving a bandwidth demand request, the corresponding RSUs process it using the “Bandwidth Assignment Coordination” module. For real-time video applications, this module strives to meet the immediate bandwidth requirements. For non-real-time applications, the “Bandwidth Allocation for Non-Real-Time Applications” module is activated. RSUs coordinate with CF XL-MIMO base stations covering the road segment to collectively provide services to vehicle passengers. During periods of network congestion, RSUs dynamically adjust bandwidth allocation based on the characteristics of each application. Real-time applications, which require immediate transmission and are highly sensitive to delays, are prioritized to prevent user dissatisfaction due to interruptions. Conversely, video segments of non-real-time applications can be pre-downloaded during less congested road sections.
Furthermore, RSUs prioritize the bandwidth needs of premium users over regular users. This means that bandwidth allocated to regular users may be reallocated to premium users to ensure superior service quality. If bandwidth demand remains unmet or if bandwidth originally assigned to regular users is reassigned to premium users, RSUs activate the “Vehicle Fleet Bandwidth Coordination” module to establish a cooperative fleet. This module utilizes V2V communication to relay content, effectively using the remaining bandwidth in non-congested road sections.
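The RSU's priority ordering (real-time before non-real-time, premium before regular within each class) can be sketched as a greedy allocation; the request fields and the single-capacity model are illustrative simplifications of the coordination modules described above:

```python
# Minimal sketch of priority-ordered bandwidth allocation at one RSU.
# Requests are sorted so real-time traffic is funded first and, within
# each class, premium users are funded before regular users.

def allocate(requests, capacity):
    """Greedy allocation; returns {request_id: granted_bandwidth}."""
    order = sorted(requests,
                   key=lambda r: (not r["realtime"], not r["premium"]))
    grants = {}
    for r in order:
        grant = min(r["demand"], capacity)
        grants[r["id"]] = grant
        capacity -= grant
    return grants

reqs = [
    {"id": "vod-regular",  "realtime": False, "premium": False, "demand": 40},
    {"id": "live-premium", "realtime": True,  "premium": True,  "demand": 50},
    {"id": "live-regular", "realtime": True,  "premium": False, "demand": 30},
]
# With 90 units available, both live streams are fully served and the VoD
# request absorbs only the remainder (it can preload later on quieter segments).
print(allocate(reqs, capacity=90))
```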
During video application operation, the vehicle’s “Bandwidth Calculation for Vehicle Video Applications” module continuously monitors network conditions and adjusts bandwidth usage. When the vehicle traverses congested road sections with heavy network traffic, the module dynamically adjusts the video quality to decrease bandwidth demand and prevent video stuttering, thereby maintaining user satisfaction. Conversely, in non-congested road sections, the module enhances the video quality of real-time videos to ensure stable and smooth playback, providing users with an enhanced viewing experience across different network environments. For non-real-time video, segments will be preloaded as much as possible to ensure playback at the expected time frames.

2.1. Route Planning and Adjustment Module for Vehicles

Before departure, vehicle passengers use the app installed in the vehicle to activate this module, planning the shortest driving route with instant traffic updates from the cloud. The system calculates the optimal route based on current traffic conditions and historical data, aiming to reach the destination in the shortest possible time. During the journey, this module continuously monitors real-time traffic conditions and dynamically adjusts the route based on real-time changes. For instance, if a traffic accident, road construction, or other unforeseen events occur ahead, the system will immediately recalculate an alternative route to help the passenger avoid congested or hazardous road sections. Simultaneously, this module periodically estimates the travel time for subsequent segments and makes predictions based on actual driving conditions.
If the vehicle passenger changes plans, such as adding a stop or changing the destination, the system will replan the route and estimated arrival time based on the new requirements. Additionally, if the estimated time of arrival for road sections differs significantly from the original schedule, this module will recalculate the shortest driving route and arrival time, making timely adjustments as needed. To ensure that RSUs obtain the latest traffic information, this module will send updated relevant information to the RSUs along the route. This information includes the times the vehicle passes through each section and the current traffic conditions. Upon receiving this information, the RSUs will update their databases, providing other vehicles with more accurate traffic information and road condition forecasts.
The execution steps of this module are as follows:
Step 1:
Before the vehicle departs, real-time traffic information is retrieved from the cloud. Based on the destination set by the vehicle passenger and the average travel time returned by each segment RSU, the shortest driving route is selected using the Dijkstra algorithm as follows:
$R^{\sigma} = (c_1^{\sigma}, c_2^{\sigma}, \ldots, c_{h^{\sigma}-1}^{\sigma}, c_{h^{\sigma}}^{\sigma}),$
where $c_1^{\sigma}$ and $c_{h^{\sigma}}^{\sigma}$ represent the starting point and the final destination of the driving route for vehicle $\sigma$, respectively.
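Step 1's shortest-route computation can be sketched as a standard Dijkstra search over a road graph whose edge weights are the RSU-reported average travel times (the graph data and function names here are illustrative):

```python
# Sketch of Step 1: shortest driving route via Dijkstra, with edge weights
# taken to be the average travel times reported by each segment's RSU.
import heapq

def shortest_route(graph, src, dst):
    """graph: {node: [(neighbor, travel_time), ...]}; returns (time, route)."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                     # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    route = [dst]
    while route[-1] != src:              # walk predecessors back to the source
        route.append(prev[route[-1]])
    return dist[dst], route[::-1]

roads = {"A": [("B", 4), ("C", 2)], "B": [("D", 5)],
         "C": [("B", 1), ("D", 8)], "D": []}
print(shortest_route(roads, "A", "D"))   # (8.0, ['A', 'C', 'B', 'D'])
```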
Step 2:
The following equations are used to calculate the time for the vehicle to arrive at each road section and transmit the estimated time to the RSU along the route:
$rt_{c_{i+1}^{\sigma}} = rt_{c_i^{\sigma}} + RD_{c_i^{\sigma}, c_{i+1}^{\sigma}}(rt_{c_i^{\sigma}}) + ID_{c_i^{\sigma}, c_{i+1}^{\sigma}}(rt_{c_i^{\sigma}}), \quad 1 \le i < h^{\sigma},$
$rt_{c_1^{\sigma}} = CurTime,$
where $c_i^{\sigma}$ represents the $i$-th intersection that vehicle $\sigma$ passes through, and $CurTime$ is the current time. $RD_{c_i^{\sigma}, c_{i+1}^{\sigma}}(rt_{c_i^{\sigma}})$ denotes the time it takes for the vehicle to pass through the segment connecting $c_i^{\sigma}$ and $c_{i+1}^{\sigma}$ after arriving at intersection $c_i^{\sigma}$ at time $rt_{c_i^{\sigma}}$, while $ID_{c_i^{\sigma}, c_{i+1}^{\sigma}}(rt_{c_i^{\sigma}})$ stands for the time spent by the vehicle due to traffic control before entering that segment. The values of $RD_{c_i^{\sigma}, c_{i+1}^{\sigma}}(\cdot)$ and $ID_{c_i^{\sigma}, c_{i+1}^{\sigma}}(\cdot)$ are both predicted using Support Vector Regression.
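The recurrence in Step 2 simply propagates arrival times along the route; a minimal sketch follows, with the Support Vector Regression predictors replaced by stand-in functions of our own (the real system would plug trained regressors into the `RD` and `ID` slots):

```python
# Sketch of Step 2: propagate estimated arrival times along the planned route.
# RD(seg, t) = segment driving time, ID(seg, t) = traffic-control delay;
# both are illustrative stand-ins for the SVR predictors in the text.

def predict_arrivals(route, cur_time, RD, ID):
    """route: list of intersections; returns the arrival time at each one."""
    arrivals = [cur_time]                         # rt at the first intersection
    for i in range(len(route) - 1):
        t = arrivals[-1]
        seg = (route[i], route[i + 1])
        arrivals.append(t + RD(seg, t) + ID(seg, t))
    return arrivals

# Toy predictors: 60 s of driving per segment, 20 s of signal delay at rush hour.
RD = lambda seg, t: 60.0
ID = lambda seg, t: 20.0 if 7 * 3600 <= t <= 9 * 3600 else 5.0

print(predict_arrivals(["c1", "c2", "c3"], cur_time=8 * 3600, RD=RD, ID=ID))
```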
Step 3:
This module operates in the background execution mode.
Step 4:
Whenever the vehicle arrives at the next intersection on the route, correct the estimated pass time for each subsequent segment as follows:
$\hat{rt}_{c_{i+1}^{\sigma}} = \hat{rt}_{c_i^{\sigma}} + RD_{c_i^{\sigma}, c_{i+1}^{\sigma}}(\hat{rt}_{c_i^{\sigma}}) + ID_{c_i^{\sigma}, c_{i+1}^{\sigma}}(\hat{rt}_{c_i^{\sigma}}), \quad 1 \le i < h^{\sigma},$
where $c_1^{\sigma}$ represents the current segment of the route for $\sigma$, and $\hat{rt}_{c_i^{\sigma}}$ denotes the time when $\sigma$ arrives at intersection $c_i^{\sigma}$. $RD_{c_i^{\sigma}, c_{i+1}^{\sigma}}(\hat{rt}_{c_i^{\sigma}})$ stands for the time it takes for $\sigma$ to pass through the segment connecting $c_i^{\sigma}$ and $c_{i+1}^{\sigma}$ after arriving at intersection $c_i^{\sigma}$ at time $\hat{rt}_{c_i^{\sigma}}$, while $ID_{c_i^{\sigma}, c_{i+1}^{\sigma}}(\hat{rt}_{c_i^{\sigma}})$ represents the time spent by the vehicle due to traffic control before entering that segment.
Step 5:
After transmitting the corrected arrival times at each intersection to the management RSUs of each segment, return to Step 3 to continue execution.
Step 6:
If the vehicle passenger changes the itinerary temporarily, recalculate the shortest driving route using the Dijkstra algorithm as follows:
$\hat{R}^{\sigma} = (\hat{c}_1^{\sigma}, \hat{c}_2^{\sigma}, \ldots, \hat{c}_{\hat{h}^{\sigma}-1}^{\sigma}, \hat{c}_{\hat{h}^{\sigma}}^{\sigma}),$
$rt_{\hat{c}_{i+1}^{\sigma}} = rt_{\hat{c}_i^{\sigma}} + RD_{\hat{c}_i^{\sigma}, \hat{c}_{i+1}^{\sigma}}(rt_{\hat{c}_i^{\sigma}}) + ID_{\hat{c}_i^{\sigma}, \hat{c}_{i+1}^{\sigma}}(rt_{\hat{c}_i^{\sigma}}), \quad 1 \le i < \hat{h}^{\sigma},$
$rt_{\hat{c}_1^{\sigma}} = CurTime + RD_{c_j^{\sigma}, c_{j+1}^{\sigma}}(rt_{c_j^{\sigma}}) \cdot \frac{rrl_{c_j^{\sigma}, c_{j+1}^{\sigma}}}{rl_{c_j^{\sigma}, c_{j+1}^{\sigma}}}, \quad 1 \le j < h^{\sigma},$
$\hat{c}_1^{\sigma} = c_{j+1}^{\sigma}, \quad 1 \le j < h^{\sigma},$
where $\hat{c}_1^{\sigma}$ and $\hat{c}_{\hat{h}^{\sigma}}^{\sigma}$ represent the next intersection and the destination of vehicle $\sigma$'s current driving route, respectively, and $c_j^{\sigma}$ is the previous intersection of the current segment. $rl_{c_j^{\sigma}, c_{j+1}^{\sigma}}$ and $rrl_{c_j^{\sigma}, c_{j+1}^{\sigma}}$ denote the length of the current segment and the remaining distance from $\sigma$'s current position to the next intersection, respectively.
Return to Step 2 to continue execution.

2.2. Bandwidth Calculation Module for Vehicle Video Applications

Once the vehicle passenger initiates the video streaming application, this module will obtain the video requirements and specifications from the application software provider’s server based on the preset video quality. For VR applications such as 360-degree video and 6DoF video, the vehicle uploads the head motion trajectory from the user’s historical database to the server. This allows the software provider to predict the FoV of the requested video and return the requirements and specifications for the base layer and enhancement layer of the video. Notably, since the G-PCC encoder directly compresses 3D PtCl and cannot be enhanced using LCEVC technology, if the video application is 6DoF PtCl live video, the server returns the video requirements and specifications without splitting the video into base and enhancement layers.
The execution steps of this module are as follows:
Step 1:
When the vehicle passenger initiates the video streaming application, this module retrieves the video specifications from the video streaming application software provider’s server. If the video type is a VR application, the user’s historical head motion trajectory is uploaded to the server. This allows the server to predict the FoV and return the specifications required for the requested video’s base layer and enhancement layer.
Step 2:
The estimated arrival times at each section of the route, along with the video application requirements and specifications, are uploaded to the RSUs along the route. Subsequently, this module requests the governing RSU over the current road segment to activate its “Bandwidth Assignment Coordination” module and awaits the results.
Step 3:
If the results received in the previous step indicate that the required bandwidth can be satisfied, proceed to the next step. Otherwise, proceed to Step 9.
Step 4:
If the video quality adjustments have reached the system-set limit, proceed to Step 10; otherwise, continue with the execution.
Step 5:
If the video application type is not 6DoF PtCl video, or if the system has reached its maximum allowable quantity of 6DoF PtCl videos, proceed to Step 8. Otherwise, continue with the execution.
Step 6:
If the 6DoF PtCl video is live video, increase the point density in the user’s video according to the following equation and return to Step 2. Otherwise, continue to the next step.
$upc_{t_j^{\sigma,v}}^{R} = \begin{cases} pc_{pl^{\sigma,v}+1}^{R} & \text{if } ursu_{t_j^{\sigma,v}} > 0,\ pl^{\sigma,v} \le \overline{pl}^{v},\ rt_{c_i^{\sigma}} \le t_j^{\sigma,v} \le rt_{c_{i+1}^{\sigma}},\ 1 \le j \le e^{\sigma,v},\ 1 \le i < h^{\sigma} \\ \overline{pl}^{v} & \text{otherwise} \end{cases}$
$dpc_{t_j^{\sigma,v}}^{R} = \begin{cases} pc_{pl^{\sigma,v}+1}^{R} & \text{if } drsu_{t_j^{\sigma,v}} > 0,\ pl^{\sigma,v} \le \overline{pl}^{v},\ rt_{c_i^{\sigma}} \le t_j^{\sigma,v} \le rt_{c_{i+1}^{\sigma}},\ 1 \le j \le e^{\sigma,v},\ 1 \le i < h^{\sigma} \\ \overline{pl}^{v} & \text{otherwise} \end{cases}$
where $upc_{t_j^{\sigma,v}}^{R}$ and $dpc_{t_j^{\sigma,v}}^{R}$ represent the point density uploaded and downloaded for 6DoF live video by vehicle passenger $\sigma$ at time $t_j^{\sigma,v}$, respectively. $ursu_{t_j^{\sigma,v}}$ and $drsu_{t_j^{\sigma,v}}$ respectively stand for the number of management RSUs whose upload and download bandwidths are insufficient for vehicle $\sigma$ during the period of passing through the segment connecting $c_i^{\sigma}$ and $c_{i+1}^{\sigma}$. $t_1^{\sigma,v}$ and $t_{e^{\sigma,v}}^{\sigma,v}$ denote the current and ending times of $v$, respectively. $c_i^{\sigma}$ and $c_{h^{\sigma}}^{\sigma}$ are the $i$-th intersection and the destination of the driving route. $\overline{pl}^{v}$ represents the system-set maximum point-density level for vehicle passengers, while $pc_{pl^{\sigma,v}}^{R}$ is the required point density for the video at the level $pl^{\sigma,v}$ set by the vehicle passenger.
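The piecewise update above can be read as a level-stepping rule: advance one point-density level while the triggering condition holds, otherwise clamp to the system maximum. A literal sketch (the density table and variable names are illustrative, mirroring the symbols in the equations):

```python
# Literal sketch of the piecewise density-update rule: step to the next
# point-density level when the RSU flag, level bound, and segment time
# window all hold; otherwise fall back to the system-set maximum level.

def next_density(pc_table, pl, pl_max, rsu_flag, in_segment_window):
    """pc_table[level] -> point density; returns the density to use next."""
    if rsu_flag > 0 and pl <= pl_max and in_segment_window:
        return pc_table[min(pl + 1, pl_max)]
    return pc_table[pl_max]

# Illustrative density table indexed by level (points per frame).
pc = {0: 100_000, 1: 250_000, 2: 500_000, 3: 1_000_000}
print(next_density(pc, pl=1, pl_max=3, rsu_flag=1, in_segment_window=True))
```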
Step 7:
Increase the point density of VoD PtCl for the user according to the following equation, then return to Step 2.
$upc_{t_j^{\sigma,v}}^{B} = \begin{cases} pc_{pl^{\sigma,v}+1}^{B} & \text{if } ursu_{t_j^{\sigma,v}} > 0,\ pl^{\sigma,v} \le \overline{pl}^{v},\ rt_{c_i^{\sigma}} \le t_j^{\sigma,v} \le rt_{c_{i+1}^{\sigma}},\ 1 \le j \le e^{\sigma,v},\ 1 \le i < h^{\sigma} \\ \overline{pl}^{v} & \text{otherwise} \end{cases}$
$dpc_{t_j^{\sigma,v}}^{B} = \begin{cases} pc_{pl^{\sigma,v}+1}^{B} & \text{if } drsu_{t_j^{\sigma,v}} > 0,\ pl^{\sigma,v} \le \overline{pl}^{v},\ rt_{c_i^{\sigma}} \le t_j^{\sigma,v} \le rt_{c_{i+1}^{\sigma}},\ 1 \le j \le e^{\sigma,v},\ 1 \le i < h^{\sigma} \\ \overline{pl}^{v} & \text{otherwise} \end{cases}$
$upc_{t_j^{\sigma,v}}^{E} = \begin{cases} pc_{pl^{\sigma,v}+1}^{E} & \text{if } ursu_{t_j^{\sigma,v}} > 0,\ pl^{\sigma,v} \le \overline{pl}^{v},\ rt_{c_i^{\sigma}} \le t_j^{\sigma,v} \le rt_{c_{i+1}^{\sigma}},\ 1 \le j \le e^{\sigma,v},\ 1 \le i < h^{\sigma} \\ \overline{pl}^{v} & \text{otherwise} \end{cases}$
$dpc_{t_j^{\sigma,v}}^{E} = \begin{cases} pc_{pl^{\sigma,v}+1}^{E} & \text{if } drsu_{t_j^{\sigma,v}} > 0,\ pl^{\sigma,v} \le \overline{pl}^{v},\ rt_{c_i^{\sigma}} \le t_j^{\sigma,v} \le rt_{c_{i+1}^{\sigma}},\ 1 \le j \le e^{\sigma,v},\ 1 \le i < h^{\sigma} \\ \overline{pl}^{v} & \text{otherwise} \end{cases}$
where $upc_{t_j^{\sigma,v}}^{B}$ and $upc_{t_j^{\sigma,v}}^{E}$ represent the point density of the base layer and enhancement layer PtCl uploaded for 6DoF VoD by the vehicle passenger, respectively. Similarly, $dpc_{t_j^{\sigma,v}}^{B}$ and $dpc_{t_j^{\sigma,v}}^{E}$ denote the point density of the base layer and enhancement layer PtCl downloaded for 6DoF VoD, respectively. $\overline{pl}^{v}$ represents the system-set maximum point-density level for vehicle passengers, while $pc_{pl^{\sigma,v}+1}^{B}$ and $pc_{pl^{\sigma,v}+1}^{E}$ are the base-layer and enhancement-layer point densities required at the next level above $pl^{\sigma,v}$.
Step 8:
If the video resolution has not reached the system-set maximum, increase the video resolution and return to Step 2. Otherwise, proceed to Step 10.
$$
uvr^{B}_{t_j^{\sigma,v}} = \begin{cases} vl^{B}_{\kappa^{\sigma,v}+1} & \text{if } ursu_{t_j^{\sigma,v}} > 0,\ \kappa^{\sigma,v} \le \overline{vl}^{\,v},\ rt_{c_i^{\sigma}} \le t_j^{\sigma,v} \le rt_{c_{i+1}^{\sigma}},\ 1 \le j \le e^{\sigma,v},\ 1 \le i < h^{\sigma} \\ \overline{vl}^{\,v} & \text{otherwise} \end{cases}
$$

$$
uvr^{E}_{t_j^{\sigma,v}} = \begin{cases} vl^{E}_{\kappa^{\sigma,v}+1} & \text{if } ursu_{t_j^{\sigma,v}} > 0,\ \kappa^{\sigma,v} \le \overline{vl}^{\,v},\ rt_{c_i^{\sigma}} \le t_j^{\sigma,v} \le rt_{c_{i+1}^{\sigma}},\ 1 \le j \le e^{\sigma,v},\ 1 \le i < h^{\sigma} \\ \overline{vl}^{\,v} & \text{otherwise} \end{cases}
$$

$$
dvr^{B}_{t_j^{\sigma,v}} = \begin{cases} vl^{B}_{\kappa^{\sigma,v}+1} & \text{if } drsu_{t_j^{\sigma,v}} > 0,\ \kappa^{\sigma,v} \le \overline{vl}^{\,v},\ rt_{c_i^{\sigma}} \le t_j^{\sigma,v} \le rt_{c_{i+1}^{\sigma}},\ 1 \le j \le e^{\sigma,v},\ 1 \le i < h^{\sigma} \\ \overline{vl}^{\,v} & \text{otherwise} \end{cases}
$$

$$
dvr^{E}_{t_j^{\sigma,v}} = \begin{cases} vl^{E}_{\kappa^{\sigma,v}+1} & \text{if } drsu_{t_j^{\sigma,v}} > 0,\ \kappa^{\sigma,v} \le \overline{vl}^{\,v},\ rt_{c_i^{\sigma}} \le t_j^{\sigma,v} \le rt_{c_{i+1}^{\sigma}},\ 1 \le j \le e^{\sigma,v},\ 1 \le i < h^{\sigma} \\ \overline{vl}^{\,v} & \text{otherwise} \end{cases}
$$
where $uvr^{B}_{t_j^{\sigma,v}}$ and $uvr^{E}_{t_j^{\sigma,v}}$ represent the resolution of the base layer and enhancement layer uploaded for video by the vehicle passenger, respectively. Similarly, $dvr^{B}_{t_j^{\sigma,v}}$ and $dvr^{E}_{t_j^{\sigma,v}}$ denote the resolution of the base layer and enhancement layer downloaded for video by the vehicle passenger, respectively. $\overline{vl}^{\,v}$ represents the system-set maximum video resolution for vehicle passengers, while $vl_{\kappa^{\sigma,v}}$ is the bandwidth requirement for the video resolution specified at level $\kappa^{\sigma,v}$.
Step 9:
If the bandwidth demand of the application is not yet satisfied and the video resolution has not reached the system-set minimum, decrease the video resolution and return to Step 2. Otherwise, proceed to the next step.
Step 10:
Notify the vehicle passenger of the returned result. If the result indicates that the bandwidth demand can be satisfied, continue execution. If not, notify the vehicle passenger and end this module.
Step 11:
If the video application type used is non-real-time, check the download status of video segments. If all video segments for that application have been preloaded, end this module.
Step 12:
This module enters background execution mode.
When the vehicle updates its driving path or arrives at the next road segment along its route, return to Step 2 and continue execution.
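The quality-adjustment loop of Steps 7-10 can be sketched as a single decision function. This is a minimal illustration, not the paper's implementation; the function name and boolean inputs (`demand_met`, `surplus`) are assumptions standing in for the $ursu/drsu$ checks above.

```python
def adjust_quality(level, max_level, min_level, demand_met, surplus):
    """Return the next quality level (point density or resolution index)
    for one adjustment round of Steps 7-10.

    level      -- current level chosen by the passenger
    max_level  -- system-set maximum level
    min_level  -- system-set minimum level
    demand_met -- True if the application's bandwidth demand is satisfied
    surplus    -- True if spare bandwidth remains along the route
    """
    if demand_met and surplus and level < max_level:
        return level + 1          # Steps 7/8: upgrade, then re-check (Step 2)
    if not demand_met and level > min_level:
        return level - 1          # Step 9: degrade, then re-check (Step 2)
    return level                  # Step 10: report the final result
```

Each call corresponds to one pass through the loop; the module keeps calling it (returning to Step 2) until the level stabilizes.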

2.3. Bandwidth Assignment Coordination

Upon receiving information regarding user service levels, vehicle location, estimated time to traverse road sections, and requirements for the base and enhancement layers of the LCEVC-enhanced video application, this module will assess whether any CF XL-MIMO wireless access points can meet the bandwidth demands of the application. If suitable, the controlling CF XL-MIMO wireless access point unit will then establish an access point cluster [41] to allocate sufficient bandwidth based on the expected requirements of the application. During configuration, special attention will be given to ensuring that bandwidth for the base layer is prioritized before allocating resources to the enhancement layer, given that these layers can be transmitted separately.
If the application involves real-time video and encounters insufficient allocated bandwidth due to network congestion, this module will prompt the CF XL-MIMO access point control unit to redistribute bandwidth originally allocated to non-real-time applications. Real-time video requires continuous and sufficient bandwidth for a smooth user experience. Should bandwidth remain insufficient after reallocation, priority will be given to meeting the needs of premium users by reallocating bandwidth initially assigned to regular users.
If any applications still lack adequate bandwidth thereafter, the “Vehicle Fleet Bandwidth Coordination” module will activate. It will seek CF XL-MIMO access points in other non-congested road sections to cover the bandwidth shortfall.
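The priority order described above (real-time before non-real-time, premium before regular, base layer before enhancement layer) can be captured by a small greedy allocator. This is a hedged sketch under assumed data shapes, not the module's actual logic; the function `allocate` and the request dictionary keys are illustrative.

```python
def allocate(capacity, requests):
    """Greedy allocation honoring the module's priority order:
    real-time before non-real-time, premium before regular, and the
    base layer before the enhancement layer of the same application.

    requests -- list of dicts with keys: real_time (bool),
                premium (bool), base (bandwidth), enh (bandwidth)
    Returns a list of granted {base, enh} amounts, one per request."""
    items = []
    for i, r in enumerate(requests):
        # Lower-sorting tuples get bandwidth first.
        rank = (not r["real_time"], not r["premium"])
        items.append((rank + (0,), i, "base", r["base"]))
        items.append((rank + (1,), i, "enh", r["enh"]))
    grants = [{"base": 0.0, "enh": 0.0} for _ in requests]
    for _, i, layer, demand in sorted(items, key=lambda t: t[0]):
        grant = min(demand, capacity)
        grants[i][layer] = grant
        capacity -= grant
    return grants
```

With 10 units of capacity, a real-time request for 6 + 6 units and a non-real-time request for 4 + 4 units, the real-time base layer is served fully first, its enhancement layer takes the remainder, and the non-real-time request receives nothing.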
The execution steps of this module are as follows:
Step 1:
After receiving the bandwidth requirements for the vehicle passengers’ application, categorize the type of video application.
Step 2:
If the application is real-time video, continue to the next step. Otherwise, activate the “Bandwidth Allocation for Non-Real-Time Applications” module. Then conclude the execution of this module.
Step 3:
Based on the specifications of the live video application and the estimated time taken to traverse road sections, calculate the bandwidth required for the real-time application along the vehicle route.
Step 4:
Determine if any CF XL-MIMO access point(s) within the communication range of the vehicle can supply bandwidth for the live video application:
$$
dub_{t_j^{\sigma,v}} = \beta^{\sigma,v} \cdot \Big[ ub^{\vartheta^{\sigma,v}}_{t_j^{\sigma,v}} - (1-\psi^{\sigma,v}) \cdot \big( uvr^{B}_{t_j^{\sigma,v}} + \varepsilon^{\sigma,v} \cdot uvr^{E}_{t_j^{\sigma,v}} \big) \cdot fr^{\sigma,v} - \psi^{\sigma,v} \cdot upc_{t_j^{\sigma,v}} \cdot fr^{\sigma,v} \Big], \quad 1 \le j \le e^{\sigma,v}
$$

$$
ursu_{t_j^{\sigma,v}} = \begin{cases} rsu_{c_i^{\sigma}} & \text{if } dub_{t_j^{\sigma,v}} < 0,\ rt_{c_i^{\sigma}} \le t_j^{\sigma,v} \le rt_{c_{i+1}^{\sigma}},\ 1 \le j \le e^{\sigma,v},\ 1 \le i < h^{\sigma} \\ 0 & \text{otherwise} \end{cases}
$$

$$
ddb_{t_j^{\sigma,v}} = db^{\vartheta^{\sigma,v}}_{t_j^{\sigma,v}} - (1-\psi^{\sigma,v}) \cdot \big( dvr^{B}_{t_j^{\sigma,v}} + \varepsilon^{\sigma,v} \cdot dvr^{E}_{t_j^{\sigma,v}} \big) \cdot fr^{\sigma,v} - \psi^{\sigma,v} \cdot dpc_{t_j^{\sigma,v}} \cdot fr^{\sigma,v}, \quad 1 \le j \le e^{\sigma,v}
$$

$$
drsu_{t_j^{\sigma,v}} = \begin{cases} rsu_{c_i^{\sigma}} & \text{if } ddb_{t_j^{\sigma,v}} < 0,\ rt_{c_i^{\sigma}} \le t_j^{\sigma,v} \le rt_{c_{i+1}^{\sigma}},\ 1 \le j \le e^{\sigma,v},\ 1 \le i < h^{\sigma} \\ 0 & \text{otherwise} \end{cases}
$$

$$
upc_{t_j^{\sigma,v}} = dpc_{t_j^{\sigma,v}} = pc_{pl^{\sigma,v}}, \quad 1 \le j < e^{\sigma,v},\ 1 \le pl^{\sigma,v} \le \overline{pl}^{\,v}
$$

$$
uvr_{t_j^{\sigma,v}} = dvr_{t_j^{\sigma,v}} = vl_{\kappa^{\sigma,v}}, \quad 1 \le j < e^{\sigma,v},\ 1 \le \kappa^{\sigma,v} \le \overline{vl}^{\,v}
$$

$$
t_{j+1}^{\sigma,v} = t_j^{\sigma,v} + \Delta, \quad 1 \le j < e^{\sigma,v}
$$

$$
atp_{c_1^{\sigma}} \le t_1^{\sigma,v} \le t_{e^{\sigma,v}}^{\sigma,v} \le atp_{c_{h^{\sigma}}^{\sigma}}
$$
where the binary flag $\beta^{\sigma,v} = 1$ indicates that real-time application $v$ is bidirectional, while $\beta^{\sigma,v} = 0$ indicates it is unidirectional. The binary indicator $\psi^{\sigma,v} = 1$ signifies that $v$ is a 6DoF PtCl application, while the binary indicator $\varepsilon^{\sigma,v} = 1$ indicates that the encoding of $v$ also includes the enhancement layer. $t_1^{\sigma,v}$ and $t_{e^{\sigma,v}}^{\sigma,v}$ denote the current and end times of $v$, respectively, while $c_i^{\sigma}$ and $c_{h^{\sigma}}^{\sigma}$ represent the indices of the $i$th intersection and the destination along the driving route, respectively. $ub^{\vartheta^{\sigma,v}}_{t_j^{\sigma,v}}$ and $db^{\vartheta^{\sigma,v}}_{t_j^{\sigma,v}}$ respectively stand for the upload and download bandwidth that the vehicle can be allocated for bidirectional video $v$ when passing through the transmission range of CF XL-MIMO access point $\vartheta^{\sigma,v}$ at time slot $t_j^{\sigma,v}$. $upc_{t_j^{\sigma,v}}$ and $dpc_{t_j^{\sigma,v}}$ denote the point density uploaded and downloaded for 6DoF PtCl video $v$, respectively; $uvr^{B}_{t_j^{\sigma,v}}$ and $dvr^{B}_{t_j^{\sigma,v}}$ represent the resolution of the uploaded and downloaded video base layer for $v$, respectively; $uvr^{E}_{t_j^{\sigma,v}}$ and $dvr^{E}_{t_j^{\sigma,v}}$ stand for the uploaded and downloaded enhancement layer, respectively, while $fr^{\sigma,v}$ denotes the designated frame rate per second. $rsu_{c_i^{\sigma}}$ represents the index of the managing RSU when the vehicle passes through segment $c_i^{\sigma}$. When $dub_{t_j^{\sigma,v}}$ or $ddb_{t_j^{\sigma,v}}$ is less than zero, the upload or download bandwidth provided by the CF XL-MIMO access point(s) for segment $c_i^{\sigma}$ cannot meet the minimum bandwidth requirement of $v$; in this case, $ursu_{t_j^{\sigma,v}}$ and $drsu_{t_j^{\sigma,v}}$ respectively indicate the managing RSU of the segment $c_i^{\sigma}$ where the upload or download bandwidth is insufficient at time slot $t_j^{\sigma,v}$.
The point density of a 6DoF PtCl video is divided into $\overline{pl}^{\,v}$ levels, which is also the system-set upper limit of the point density for vehicle passengers, while $pc_{pl^{\sigma,v}}$ is the point-density requirement set by the vehicle passenger at level $pl^{\sigma,v}$. The upper limit of the streaming-resolution level for the other categories of video is $\overline{vl}^{\,v}$, and $vl_{\kappa^{\sigma,v}}$ is the streaming-resolution requirement set by the vehicle passenger at level $\kappa^{\sigma,v}$.
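The per-slot check of Step 4 amounts to comparing the supplied bandwidth with the layered demand and flagging the managing RSU of any under-provisioned slot. The following is a simplified sketch under assumed units (per-frame layer demands times frame rate); function names are illustrative, not from the paper.

```python
def slot_deficit(supplied, base_rate, enh_rate, frame_rate, has_enh=True):
    """Surplus (>= 0) or deficit (< 0) of one time slot, in the spirit
    of the d_ub / d_db computation: supplied bandwidth minus the demand
    of the base layer plus (optionally) the enhancement layer."""
    demand = (base_rate + (enh_rate if has_enh else 0.0)) * frame_rate
    return supplied - demand

def flag_rsu(deficits, rsu_ids):
    """Return the managing-RSU indices of slots whose deficit is
    negative, mirroring how ursu/drsu record insufficient segments."""
    return [rsu for d, rsu in zip(deficits, rsu_ids) if d < 0]
```

For example, a slot supplying 100 units against a 30 fps stream needing 2 units per frame for the base layer and 1 for the enhancement layer has a surplus of 10 units.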
Step 5:
If the bandwidth requirements of the video can be met, notify the CF XL-MIMO access point control unit to provide the bandwidth the vehicle passenger expects, and then return the result to the vehicle passenger. The execution of this module ends.
If not, continue.
Step 6:
To meet the bandwidth requirements of real-time applications, the module reallocates the bandwidth currently assigned to non-real-time applications during the same time slot. In order to address any remaining bandwidth deficit for non-real-time applications after this adjustment, the ‘Bandwidth Allocation for Non-Real-Time Applications’ module will subsequently be activated.
$$
bun_{t_j^{\sigma,v}} = \begin{cases} \sum_{\mu} ub^{\theta^{\mu}}_{t_j^{\sigma,v}} & \text{if } dub_{t_j^{\sigma,v}} < 0 \\ 0 & \text{otherwise} \end{cases}, \quad 1 \le j \le e^{\sigma,v}
$$

$$
dub_{t_j^{\sigma,v}} = dub_{t_j^{\sigma,v}} + bun_{t_j^{\sigma,v}}
$$

$$
bdn_{t_j^{\sigma,v}} = \begin{cases} \sum_{\mu} db^{\theta^{\mu}}_{t_j^{\sigma,v}} & \text{if } ddb_{t_j^{\sigma,v}} < 0 \\ 0 & \text{otherwise} \end{cases}, \quad 1 \le j \le e^{\sigma,v}
$$

$$
ddb_{t_j^{\sigma,v}} = ddb_{t_j^{\sigma,v}} + bdn_{t_j^{\sigma,v}}
$$
where $\mu$ is the index of a non-real-time application and $\theta^{\mu}$ represents the CF XL-MIMO access point that originally allocated bandwidth to $\mu$. $bun_{t_j^{\sigma,v}}$ and $bdn_{t_j^{\sigma,v}}$ denote the upload and download bandwidth that can be reassigned from $\mu$ to real-time application $v$, respectively. When the updated values of $dub_{t_j^{\sigma,v}}$ and $ddb_{t_j^{\sigma,v}}$ remain less than zero, the bandwidth reassigned from $\mu$ to $v$ still cannot meet the bandwidth requirements of $v$.
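The reclaiming step can be viewed as draining non-real-time allocations until the real-time deficit is covered. The sketch below is an illustrative simplification, assuming the deficit is expressed as a negative number and `nrt_allocations` maps each non-real-time application to its reclaimable bandwidth.

```python
def reclaim_from_non_realtime(deficit, nrt_allocations):
    """Step 6 sketch: pull bandwidth from non-real-time allocations
    until the real-time deficit (a negative number) reaches zero.

    Returns the updated deficit and the amount reclaimed per app;
    apps whose bandwidth was taken are later re-served by the
    'Bandwidth Allocation for Non-Real-Time Applications' module."""
    reclaimed = {}
    for app, alloc in nrt_allocations.items():
        if deficit >= 0:
            break                       # demand already covered
        take = min(alloc, -deficit)     # take no more than needed
        reclaimed[app] = take
        deficit += take
    return deficit, reclaimed
```

A deficit of -7 against two apps holding 5 units each is closed by taking 5 from the first and 2 from the second.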
Step 7:
If the expected bandwidth of the real-time application v has been met, notify the vehicle passenger and end the execution of this module; otherwise, continue to the next step.
Step 8:
If the passenger is a premium user, proceed to the next step; otherwise, skip to Step 12.
Step 9:
Allocate the bandwidth beyond the minimum required for real-time applications of regular users to real-time applications for premium users.
$$
bur_{t_j^{\sigma,v}} = \begin{cases} \sum_{\varsigma} ub^{\theta^{\varsigma}}_{t_j^{\sigma,v}} & \text{if } dub_{t_j^{\sigma,v}} < 0 \\ 0 & \text{otherwise} \end{cases}, \quad 1 \le j \le e^{\sigma,v}
$$

$$
dub_{t_j^{\sigma,v}} = dub_{t_j^{\sigma,v}} + bur_{t_j^{\sigma,v}}
$$

$$
bdr_{t_j^{\sigma,v}} = \begin{cases} \sum_{\varsigma} db^{\theta^{\varsigma}}_{t_j^{\sigma,v}} & \text{if } ddb_{t_j^{\sigma,v}} < 0 \\ 0 & \text{otherwise} \end{cases}, \quad 1 \le j \le e^{\sigma,v}
$$

$$
ddb_{t_j^{\sigma,v}} = ddb_{t_j^{\sigma,v}} + bdr_{t_j^{\sigma,v}}
$$
where $\varsigma$ is the index of a real-time application of a regular user and $\theta^{\varsigma}$ represents the CF XL-MIMO access point that originally allocated bandwidth to $\varsigma$. $bur_{t_j^{\sigma,v}}$ and $bdr_{t_j^{\sigma,v}}$ denote the upload and download bandwidth that can be reassigned from $\varsigma$ to $v$, respectively. When the updated values of $dub_{t_j^{\sigma,v}}$ and $ddb_{t_j^{\sigma,v}}$ remain less than zero, the bandwidth reassigned from $\varsigma$ to $v$ still cannot meet the requirements of $v$.
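Unlike Step 6, Step 9 may only take what regular users' real-time applications hold *beyond* their minimum requirement, so their playback is degraded but not interrupted. A hedged sketch, with assumed data shapes (each regular app mapped to its allocated bandwidth and its minimum):

```python
def reclaim_above_minimum(deficit, regular_apps):
    """Step 9 sketch: reclaim only bandwidth that regular users'
    real-time apps hold beyond their minimum requirement.

    deficit      -- premium user's shortfall as a negative number
    regular_apps -- maps app -> (allocated, minimum)
    Returns the updated deficit and the amount reclaimed per app."""
    reclaimed = {}
    for app, (alloc, minimum) in regular_apps.items():
        spare = max(alloc - minimum, 0)
        if deficit >= 0 or spare <= 0:
            continue                    # covered, or nothing to spare
        take = min(spare, -deficit)
        reclaimed[app] = take
        deficit += take
    return deficit, reclaimed
```

Apps whose spare bandwidth was taken are the ones Step 10 hands to the “Vehicle Fleet Bandwidth Coordination” module for reallocation.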
Step 10:
If the previous step shows that either b u r t j σ , v or b d r t j σ , v is greater than zero, it indicates that part of the bandwidth for a regular user’s real-time application ς has been reassigned to the premium user. In this case, initiate the “Vehicle Fleet Bandwidth Coordination” module to reallocate bandwidth for ς.
Step 11:
At this stage, if both d u b t j σ , v and d d b t j σ , v are zero, it means that the expected bandwidth requirement of the requested application has been met. Notify the vehicle passenger and end the execution of this module. Otherwise, proceed to the next step.
Step 12:
Activate the “Vehicle Fleet Bandwidth Coordination” module to identify CF XL-MIMO access points in less congested road sections that can allocate bandwidth for application v .
Step 13:
Notify the vehicle passenger of the results and end the execution of this module.

2.4. Bandwidth Allocation for Non-Real-Time Applications

Since the video content of a non-real-time application is stored on the servers of application software providers, video segments can be pre-downloaded to the memory of the requesting passenger’s vehicle when bandwidth is abundant. This module adjusts the ratio of pre-downloaded video segments based on the remaining travel time and network conditions, and inversely adjusts according to the proportion of remaining segments stored in the vehicle. When the network is uncongested, RSUs can allocate more bandwidth to pre-download additional segments for the passenger’s later viewing. This approach helps avoid yielding bandwidth to real-time applications in congested road segments and prevents service interruptions or quality degradation of non-real-time videos. Conversely, when the network is crowded, this module reduces the number of pre-downloaded segments to free up bandwidth for serving more users.
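The pre-download policy described above can be sketched as a quota rule: cover the remaining travel time when the network is uncongested, but keep only a small cushion when it is congested. The function name, the ceiling-based segment count, and the cushion size of two segments are illustrative assumptions, not values from the paper.

```python
def predownload_quota(remaining_travel_s, seg_len_s, buffered_segs,
                      congested):
    """How many more segments to pre-download right now.

    remaining_travel_s -- remaining travel time in seconds
    seg_len_s          -- playback duration of one segment in seconds
    buffered_segs      -- segments already stored in the vehicle
    congested          -- True when the current segment is congested
    """
    # Segments needed to cover the rest of the trip (ceiling division),
    # minus what is already buffered.
    needed = max(0, -(-remaining_travel_s // seg_len_s) - buffered_segs)
    if congested:
        return min(needed, 2)   # keep a small cushion; free up bandwidth
    return needed               # uncongested: prefetch everything needed
```

With 100 s of travel left, 10 s segments, and 3 segments buffered, the module would fetch 7 more segments on an uncongested road but only 2 on a congested one.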
The execution steps of this module are as follows:
Step 1:
Considering the specifications of the base and enhancement layers of the non-real-time video, network conditions, the remaining travel time of the vehicle, and the number of remaining segments stored in the vehicle’s memory, the bandwidth allocated to the non-real-time application v for the remaining route can be calculated as follows:
$$
\operatorname{Max}(\bar{s}^{\sigma,v}),
$$

subject to:

$$
\delta_{\tau_s^{\sigma,v}} \cdot \Delta \cdot \sum_{\vartheta^{\sigma}} db^{\vartheta^{\sigma,v}}_{\tau_s^{\sigma,v}} \ge sl_s^{\sigma,v} \cdot \big( dvr^{B}_{\tau_s^{\sigma,v}} + dvr^{E}_{\tau_s^{\sigma,v}} \big) \cdot fr^{\sigma,v}, \quad 1 \le \delta_{\tau_s^{\sigma,v}},\ is^{\sigma,v} \le s \le \bar{s}^{\sigma,v},
$$

$$
1 \le is^{\sigma,v} \le \bar{s}^{\sigma,v} \le S^{\sigma,v},
$$

$$
\tau_1^{\sigma,v} = spt^{\sigma,v},
$$

$$
rt_{c_1^{\sigma}} < spt^{\sigma,v} < ept^{\sigma,v} \le rt_{c_{h^{\sigma}}^{\sigma}},
$$

$$
CurTime < \tau_{is^{\sigma,v}}^{\sigma,v},
$$

$$
ept^{\sigma,v} \le rt_{c_{h^{\sigma}}^{\sigma}} \le CurTime + \sum_{s=is^{\sigma,v}}^{\bar{s}^{\sigma,v}} sl_s^{\sigma,v},
$$

$$
spt^{\sigma,v} \le \tau_s^{\sigma,v} < \tau_s^{\sigma,v} + \delta_{\tau_s^{\sigma,v}} \cdot \Delta < ept^{\sigma,v}, \quad is^{\sigma,v} \le s \le \bar{s}^{\sigma,v},
$$

$$
rt_{c_i^{\sigma}} \le \tau_s^{\sigma,v} \le rt_{c_{i+1}^{\sigma}} \quad \text{if } \rho_{c_i^{\sigma}, c_{i+1}^{\sigma}} \le \underline{\rho},\ 1 \le i < h^{\sigma},
$$

$$
dvr^{B}_{\tau_s^{\sigma,v}} = dvr^{E}_{\tau_s^{\sigma,v}} = vl_{\kappa^{\sigma,v}}, \quad is^{\sigma,v} \le s \le \bar{s}^{\sigma,v},
$$

$$
buf^{\sigma,v} = buf^{\sigma,v} + \sum_{s=is^{\sigma,v}}^{\bar{s}^{\sigma,v}} \big[ sl_s^{\sigma,v} \cdot \big( dvr^{B}_{\tau_s^{\sigma,v}} + dvr^{E}_{\tau_s^{\sigma,v}} \big) \cdot fr^{\sigma,v} \big],
$$
where $\bar{s}^{\sigma,v}$ represents the number of video segments of $v$ that this module allows to be pre-downloaded, $\tau_s^{\sigma,v}$ is the time for pre-downloading video segment $s$ of non-real-time application $v$, $S^{\sigma,v}$ denotes the total number of video segments of $v$, and $is^{\sigma,v}$ stands for the index of the first video segment that has not yet been downloaded. $db^{\vartheta^{\sigma,v}}_{\tau_s^{\sigma,v}}$ is the bandwidth provided by CF XL-MIMO access point $\vartheta^{\sigma,v}$ for pre-downloading $v$ during time $\tau_s^{\sigma,v}$ over duration $\Delta$, and $\delta_{\tau_s^{\sigma,v}}$ is the number of time slots for consecutively downloading video segments within the communication range of $\vartheta^{\sigma,v}$. $sl_s^{\sigma,v}$ represents the duration of video segment $s$ of $v$. $dvr^{B}_{\tau_s^{\sigma,v}}$ and $dvr^{E}_{\tau_s^{\sigma,v}}$ indicate the resolutions of the base and enhancement layers of $v$, respectively, while $fr^{\sigma,v}$ denotes the frame rate per second. $CurTime$ is the current time, $spt^{\sigma,v}$ and $ept^{\sigma,v}$ stand for the start and end times of playing $v$, respectively, $\underline{\rho}$ is the traffic-flow threshold at or above which a road segment is considered congested, and $\rho_{c_i^{\sigma}, c_{i+1}^{\sigma}}$ represents the traffic flow on the segment connecting road sections $c_i^{\sigma}$ and $c_{i+1}^{\sigma}$. $vl_{\kappa^{\sigma,v}}$ is the resolution requirement at level $\kappa^{\sigma,v}$ set by the vehicle passenger, and $buf^{\sigma,v}$ represents the pre-downloaded video segments stored in the buffer.
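Since segments must be fetched in playback order, the $\operatorname{Max}(\bar{s}^{\sigma,v})$ objective can be approximated greedily: accumulate the bandwidth-time budget of the remaining slots and count how many consecutive segments fit. This is a deliberately simplified sketch (it collapses the per-slot timing constraints into one aggregate budget); all names are illustrative.

```python
def max_predownload(slot_bw, slot_s, seg_demand):
    """Greedy approximation of the Max(s-bar) objective.

    slot_bw    -- bandwidth available in each remaining time slot
    slot_s     -- duration of one slot (the Delta in the constraints)
    seg_demand -- bandwidth-time demand of each not-yet-downloaded
                  segment, in playback order
    Returns how many consecutive segments can be pre-downloaded."""
    budget = sum(bw * slot_s for bw in slot_bw)   # total bits available
    count = 0
    for demand in seg_demand:                     # playback order
        if demand > budget:
            break                                 # next segment won't fit
        budget -= demand
        count += 1
    return count
```

Two slots of 10 units over 1 s each give a budget of 20, which covers four segments of demand 5 apiece.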
Step 2:
Examine whether the CF XL-MIMO access points from the previous step can meet the minimum bandwidth requirements for the base and enhancement layers of v as follows:
$$
dpt^{\sigma,v} = CurTime + \sum_{s=is^{\sigma,v}}^{\bar{s}^{\sigma,v}} sl_s^{\sigma,v} - rt_{c_{h^{\sigma}}^{\sigma}},
$$
where d p t σ , v is the time duration of pre-downloaded video v stored in the buffer.
Step 3:
If $dpt^{\sigma,v} \ge 0$, it indicates that the required bandwidth for both the base and enhancement layers of $v$ can be pre-downloaded. In this case, the CF XL-MIMO access point control unit is notified, and the result is returned to the vehicle passenger, concluding the execution of this module. Otherwise, proceed to the next step.
Step 4:
Calculate the bandwidth available for the remaining travel distance of the non-real-time application based on the specifications of the base layer v :
$$
\operatorname{Max}(\bar{s}^{\sigma,v}),
$$

subject to:

$$
\delta_{\tau_s^{\sigma,v}} \cdot \Delta \cdot \sum_{\vartheta^{\sigma}} db^{\vartheta^{\sigma,v}}_{\tau_s^{\sigma,v}} \ge sl_s^{\sigma,v} \cdot dvr^{B}_{\tau_s^{\sigma,v}} \cdot fr^{\sigma,v}, \quad 1 \le \delta_{\tau_s^{\sigma,v}},\ is^{\sigma,v} \le s \le \bar{s}^{\sigma,v},
$$

Equations (37)–(45)
Step 5:
If the previous step can meet the bandwidth requirements for the base layer of v , then notify the CF XL-MIMO access point control unit to provide the required bandwidth for the base layer to the vehicle, and return the result to the vehicle passenger, concluding the execution of this module. Otherwise, continue to the next step.
Step 6:
If the vehicle passenger is categorized as a regular user, proceed to Step 9.
Step 7:
Consult the RSU of the remaining road segments along the route to reallocate the bandwidth originally assigned to non-real-time applications of regular users to premium users.
$$
\operatorname{Max}(\bar{s}^{\sigma,v}),
$$

subject to:

$$
\delta_{\tau_s^{\sigma,v}} \cdot \Delta \cdot \sum_{\mu} db^{\varphi^{\mu}}_{\tau_s^{\sigma,v}} \ge sl_s^{\sigma,v} \cdot dvr^{B}_{\tau_s^{\sigma,v}} \cdot fr^{\sigma,v}, \quad 1 \le \delta_{\tau_s^{\sigma,v}},\ is^{\sigma,v} \le s \le \bar{s}^{\sigma,v},
$$

Equations (37)–(45)

where $\mu$ is the index of a non-real-time application of a regular user, and $db^{\varphi^{\mu}}_{\tau_s^{\sigma,v}}$ is the bandwidth provided by CF XL-MIMO access point $\varphi^{\mu}$ for pre-downloading $\mu$ at time $\tau_s^{\sigma,v}$ over duration $\Delta$.
Step 8:
If the bandwidth obtained in the previous step meets the bandwidth requirements, return the result to the vehicle passenger, and conclude the execution of this module. Otherwise, continue to the next step.
Step 9:
Activate the “Vehicle Fleet Bandwidth Coordination” module to locate CF XL-MIMO access points on less congested road segments for non-real-time applications experiencing bandwidth deficits. Transfer bandwidth to these applications via inter-vehicle communication and notify the vehicle passenger of the outcome.

2.5. Vehicle Fleet Bandwidth Coordination

This module facilitates the formation of a vehicle fleet by connecting the requesting passenger’s vehicle with others within the V2V communication range. It alleviates bandwidth deficits for passengers needing to upload or download video content through the excess bandwidth provided by CF XL-MIMO access points located at less congested road segments. Once the fleet connection is successfully established, excess bandwidth from these CF XL-MIMO access points can be relayed to the requesting passenger’s vehicle located at the rear of the fleet.
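Because the surplus enters at the fleet head and must be relayed hop by hop to the requesting vehicle at the tail, the bandwidth that actually arrives is bounded both by the access point's surplus and by the tightest V2V link along the chain (the role of the $urb \ge uvb$ and $drb \ge dvb$ constraints below). A minimal sketch, with illustrative names:

```python
def relayable_bandwidth(head_surplus, hop_remaining):
    """Bandwidth that can reach the tail vehicle of one fleet:
    the minimum of the CF XL-MIMO surplus available at the fleet
    head and the remaining V2V capacity of every hop in the chain."""
    return min([head_surplus] + list(hop_remaining))
```

For instance, a 50-unit surplus relayed over hops with 40 and 60 units of spare V2V capacity delivers only 40 units to the tail; several fleets can then be combined to cover a larger deficit.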
The execution steps of this module are as follows:
Step 1:
The RSU connects the vehicle requesting bandwidth with other vehicles within the V2V communication range to form a fleet.
$$
\arg\min_{F^{\sigma,v}} \Bigg\{ \gamma^{\sigma,v} \bigg[ \beta^{\sigma,v} \Big( \sum_{f=1}^{F^{\sigma,v}} ub^{\phi_{r_{f,1}^{\sigma,v}}}_{t_l^{\sigma,v}} - du_{t_l^{\sigma,v}} \Big) + \Big( \sum_{f=1}^{F^{\sigma,v}} db^{\phi_{r_{f,1}^{\sigma,v}}}_{t_l^{\sigma,v}} - dd_{t_l^{\sigma,v}} \Big) \bigg] + (1-\gamma^{\sigma,v}) \sum_{f=1}^{F^{\sigma,v}} \Big[ \delta_{\tau_s^{\sigma,v}} \cdot \Delta \cdot db^{\phi_{r_{f,1}^{\sigma,v}}}_{t_l^{\sigma,v}} - sl_s^{\sigma,v} \cdot dvr^{B}_{t_l^{\sigma,v}} \cdot fr^{\sigma,v} \Big] \Bigg\}
$$

subject to:

$$
f^{\sigma,v} = \big( r_{f,1}^{\sigma,v}, r_{f,2}^{\sigma,v}, \ldots, r_{f,\chi_f^{\sigma,v}-1}^{\sigma,v}, r_{f,\chi_f^{\sigma,v}}^{\sigma,v} \big), \quad 1 \le f < F^{\sigma,v},
$$

$$
r_{f,\chi_f^{\sigma,v}}^{\sigma,v} = \sigma, \quad 1 \le f < F^{\sigma,v},
$$

$$
1 \le \delta_{\tau_s^{\sigma,v}},\ is^{\sigma,v} \le s \le \bar{s}^{\sigma,v}, \quad \text{if } \gamma^{\sigma,v} = 0,
$$

$$
\sum_{f=1}^{F^{\sigma,v}} \delta_{\tau_s^{\sigma,v}} \cdot \Delta \cdot db^{\phi_{r_{f,1}^{\sigma,v}}}_{t_l^{\sigma,v}} \ge sl_s^{\sigma,v} \cdot dvr^{B}_{t_l^{\sigma,v}} \cdot fr^{\sigma,v}, \quad 1 \le l \le e^{\sigma,v}, \quad \text{if } \gamma^{\sigma,v} = 0,
$$

$$
\sum_{f=1}^{F^{\sigma,v}} ub^{\phi_{r_{f,1}^{\sigma,v}}}_{t_l^{\sigma,v}} \ge du_{t_l^{\sigma,v}}, \quad 1 \le l \le e^{\sigma,v},\ rt_{c_k^{\sigma}} \le t_l^{\sigma,v} < rt_{c_{k+1}^{\sigma}}, \quad \text{if } \gamma^{\sigma,v} = 1,\ \beta^{\sigma,v} = 1,
$$

$$
\sum_{f=1}^{F^{\sigma,v}} db^{\phi_{r_{f,1}^{\sigma,v}}}_{t_l^{\sigma,v}} \ge dd_{t_l^{\sigma,v}}, \quad 1 \le l \le e^{\sigma,v},\ rt_{c_k^{\sigma}} \le t_l^{\sigma,v} < rt_{c_{k+1}^{\sigma}}, \quad \text{if } \gamma^{\sigma,v} = 1,
$$

$$
urb^{r_{f,i}^{\sigma,v}, r_{f,i+1}^{\sigma,v}}_{t_l^{\sigma,v}} \ge uvb^{r_{f,i}^{\sigma,v}, r_{f,i+1}^{\sigma,v}}_{t_l^{\sigma,v}}, \quad 1 \le l \le e^{\sigma,v},\ 1 \le i < e^{\sigma,v},\ 1 \le f \le F_k^{\sigma,v}, \quad \text{if } \beta^{\sigma,v} = 1,
$$

$$
drb^{r_{f,i}^{\sigma,v}, r_{f,i+1}^{\sigma,v}}_{t_l^{\sigma,v}} \ge dvb^{r_{f,i}^{\sigma,v}, r_{f,i+1}^{\sigma,v}}_{t_l^{\sigma,v}}, \quad 1 \le l \le e^{\sigma,v},\ 1 \le i < e^{\sigma,v},\ 1 \le f \le F_k^{\sigma,v},
$$

$$
t_{l+1}^{\sigma,v} = t_l^{\sigma,v} + \Delta, \quad 1 \le l \le e^{\sigma,v}
$$
where the binary flags $\gamma^{\sigma,v} = 1$ and $\beta^{\sigma,v} = 1$ indicate that the application is a real-time video and a bidirectional video, respectively. $F^{\sigma,v}$ is the number of fleets, $f^{\sigma,v}$ denotes the $f$-th fleet that relays bandwidth for $v$, and $\chi_f^{\sigma,v}$ represents the length of $f^{\sigma,v}$. $r_{f,1}^{\sigma,v}$ stands for the index of the vehicle at the head of the fleet, $r_{f,\chi_f^{\sigma,v}}^{\sigma,v}$ represents vehicle $\sigma$ located at the end of the fleet, $\phi_{r_{f,1}^{\sigma,v}}$ represents a CF XL-MIMO access point covering the communication range of the fleet head, and $c_{h^{\sigma}}^{\sigma}$ is the destination of $\sigma$. $v$ faces a bandwidth deficit when passing the $k$-th road segment connecting $c_k^{\sigma}$ and $c_{k+1}^{\sigma}$. $du_{t_l^{\sigma,v}}$ and $dd_{t_l^{\sigma,v}}$ respectively represent the upload and download bandwidth deficits of a real-time video at time $t_l^{\sigma,v}$ when passing through the segment, while $s$ is the index of a non-real-time video segment that cannot be pre-downloaded at time $t_l^{\sigma,v}$. $t_1^{\sigma,v}$ and $t_{e^{\sigma,v}}^{\sigma,v}$ represent the start and end playback times for $v$, respectively. $urb^{r_{f,i}^{\sigma,v}, r_{f,i+1}^{\sigma,v}}_{t_l^{\sigma,v}}$ and $drb^{r_{f,i}^{\sigma,v}, r_{f,i+1}^{\sigma,v}}_{t_l^{\sigma,v}}$ denote the remaining bandwidth that can be uploaded and downloaded from vehicle $r_{f,i}^{\sigma,v}$ to $r_{f,i+1}^{\sigma,v}$ at time $t_l^{\sigma,v}$, respectively, while $uvb^{r_{f,i}^{\sigma,v}, r_{f,i+1}^{\sigma,v}}_{t_l^{\sigma,v}}$ and $dvb^{r_{f,i}^{\sigma,v}, r_{f,i+1}^{\sigma,v}}_{t_l^{\sigma,v}}$ represent the total bandwidth required to upload and download from vehicle $r_{f,i}^{\sigma,v}$ to vehicle $r_{f,i+1}^{\sigma,v}$ at time $t_l^{\sigma,v}$, respectively.
Step 2:
If the fleet connection establishment fails, record the insufficient bandwidth for v and proceed to Step 4. Otherwise, continue to execute the next step.
Step 3:
Once the fleet is formed, bandwidth is forwarded to v using V2V communication. This module evaluates whether the CF XL-MIMO access points supporting the fleet meet v’s bandwidth requirements as follows:
$$
dut_{t_l^{\sigma,v}} = \begin{cases} \displaystyle\sum_{f=1}^{F^{\sigma,v}} ub^{\phi_{r_{f,1}^{\sigma,v}}}_{t_l^{\sigma,v}} - du_{t_l^{\sigma,v}} & \text{if } \beta^{\sigma,v} = 1 \text{ and } \gamma^{\sigma,v} = 1 \\ 0 & \text{otherwise} \end{cases},
$$

$$
ddt_{t_l^{\sigma,v}} = \begin{cases} \displaystyle\sum_{f=1}^{F^{\sigma,v}} db^{\phi_{r_{f,1}^{\sigma,v}}}_{t_l^{\sigma,v}} - dd_{t_l^{\sigma,v}} & \text{if } \gamma^{\sigma,v} = 1 \\ 0 & \text{otherwise} \end{cases},
$$

$$
dnrt_{t_l^{\sigma,v}} = \begin{cases} \displaystyle\sum_{f=1}^{F^{\sigma,v}} \delta_{\tau_s^{\sigma,v}} \cdot \Delta \cdot db^{\phi_{r_{f,1}^{\sigma,v}}}_{t_l^{\sigma,v}} - sl_s^{\sigma,v} \cdot dvr^{B}_{t_l^{\sigma,v}} \cdot fr^{\sigma,v} & \text{if } \gamma^{\sigma,v} = 0 \\ 0 & \text{otherwise} \end{cases},
$$
where $t_l^{\sigma,v}$ denotes the time period during which the application faces a bandwidth deficit. If $dut_{t_l^{\sigma,v}}$ is negative, the real-time application has not obtained sufficient bandwidth for uploading video at time $t_l^{\sigma,v}$. Similarly, if $ddt_{t_l^{\sigma,v}}$ is negative, the application has not obtained sufficient bandwidth for downloading video at that time. If $dnrt_{t_l^{\sigma,v}}$ is negative, the bandwidth for the non-real-time video segment is insufficient at that time.
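The residual-deficit check of Step 3 reduces to subtracting the needed bandwidth from the relayed total per direction, with the upload direction only relevant for bidirectional real-time video. A hedged sketch under assumed dictionary shapes (names are illustrative):

```python
def evaluate_fleet_support(relayed, needed, bidirectional):
    """Residual per-direction deficits after fleets forward bandwidth,
    in the spirit of d_ut / d_dt / d_nrt: negative values mean the
    relayed bandwidth still falls short.

    relayed, needed -- dicts with keys 'down' and (if bidirectional)
                       'up', in bandwidth units
    """
    residual = {"down": relayed["down"] - needed["down"]}
    if bidirectional:
        residual["up"] = relayed["up"] - needed["up"]
    return residual
```

A negative entry triggers Step 4's fallback: notify the passenger and retry fleet formation at the next intersection.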
Step 4:
If the fleet formed in Step 1 has successfully met the expected bandwidth for v after forwarding the bandwidth, notify the vehicle passenger of the result, and this module will terminate its execution.
Otherwise, notify the vehicle passenger of the result, and this module will enter background execution mode.
At the next intersection on the vehicle’s route, it will resume at Step 1 and continue its execution.

3. Simulation Results and Discussion

In this study, the proposed algorithm was simulated on a personal computer equipped with an Intel Core i7 2.9 GHz CPU and 64 GB RAM. Historical traffic data were sourced from a website that provides traffic volume counts for New York City [42]. In-vehicle video applications are categorized into live video and VoD, including 2D video, 360-degree video, and 6DoF video. The 6DoF video format includes two types: PtCl video and MIV video. The simulated data for the aforementioned video applications were obtained from statistics on the user base of VoD and live video streaming platforms [43], estimates of market growth and share for immersive video [44], and projections of user numbers for 360-degree video and interactive immersive 6DoF video [45]. Additionally, factors such as the relative ease of producing MIV format compared to PtCl format in 6DoF video were considered in determining the counts and ratios for various applications.
Table 1 shows the minimum bandwidth requirements for various resolutions of 2D live video and 2D VoD [46]. Live video is encoded using AVC, while VoD is encoded using VVC, with both enhanced into a base layer and an enhancement layer through LCEVC [8,47].
Table 2 and Table 3 show the minimum bandwidth requirements for 360-degree VoD and 360-degree live video [48,49], respectively. For 360-degree live video, AVC encoding is used, while 360-degree VoD employs VVC encoding, both enhanced with LCEVC. The 360-degree video is segmented into multiple tiles, with tiles within the FoV range transmitted at normal resolution, and tiles outside the FoV at the lowest resolution [50]. Premium users have a 120-degree FoV setting, allowing for a broader video view, whereas regular users have the standard 90-degree FoV setting.
Table 4 and Table 5 detail the minimum bandwidth requirements for 6DoF PtCl video, consolidating specifications from several studies [17,23,51,52,53] according to the density of the PtCl. Table 4 outlines the bandwidth needs for 6DoF PtCl VoD using V-PCC technology. VVC encoding is used for PtCl compression, enhanced by LCEVC. The compression mode, either All-Intra (AI) or Random Access (RA), influences both the compression level and video quality of PtCl. AI mode delivers high-quality video, while RA mode requires half the bitrate of AI mode but with reduced quality. Premium users are provided with the AI encoding mode and a 120-degree FoV, whereas regular users receive the RA mode and a 90-degree FoV, which is more bitrate-efficient. Table 5 presents the bandwidth requirements for 6DoF PtCl live video using G-PCC technology. According to [23], there are three different PtCl processing methods based on FoV: Occlusion Visibility (OV), which excludes occluded PtCl; Distance Visibility (DV), which adjusts PtCl density based on viewing distance; and Viewport Visibility (VV), which transmits PtCl only within and around the FoV. Premium users utilize the OV processing method, while regular users employ a mixed strategy of OV, DV, and VV to minimize bandwidth requirements.
Table 6 and Table 7 outline the minimum bandwidth requirements for MIV VoD and MIV live video, respectively [13,14]. In contrast to the PtCl format, MIV video is produced based on 2D video, leading to bandwidth requirements categorized by video resolution. MIV VoD employs VVC encoding enhanced with LCEVC. Conversely, MIV live video utilizes AVC encoding enhanced with LCEVC. Users can select between different configuration modes: MIV View mode and MIV Atlas mode [15]. In View mode, specific details in the atlas are preserved after MIV video synthesis, saving approximately 19% bitrate. Atlas mode involves trimming details in the atlas, resulting in a certain loss of detail and a 39% bitrate reduction. Premium users opt for MIV View mode, while regular users opt for MIV Atlas mode.
In this study, three wireless communication technologies across different frequency bands are utilized. THz [54,55] and mmWave [56] communications are employed to provide bandwidth for vehicle passengers’ applications via V2I communication, while Li-Fi technology within the realm of optical wireless communication [57] is used to relay bandwidth for vehicle passengers’ applications via V2V communication. Table 8 details the maximum achievable bandwidth and transmission distances for these wireless communication technologies.
Figure 3 illustrates the variation in the number of vehicles throughout the day. The graph highlights significant peaks during the morning commute from 7 a.m. to 10 a.m. and the evening commute from 5 p.m. to 8 p.m., with peak values observed at 9 a.m. and 7 p.m., respectively. Vehicle flow increases notably during these peak periods. Between 11 a.m. and 4 p.m., outside of the commute peaks, vehicle flow remains relatively stable, consistently above 3700 vehicles. As night approaches, vehicle flow gradually declines, with the lowest traffic occurring during the late night to early morning hours.
Figure 4 illustrates the usage of video applications among vehicle passengers across various time periods, encompassing 2D video, 360-degree video, PtCl video, and MIV video, categorized into live and VoD formats. The graph demonstrates that VoD usage exceeds live usage across all video types, driven by the convenience of on-demand viewing. Furthermore, due to its widespread availability, 2D video shows the highest usage among all formats. Although the usage numbers for 360-degree video, PtCl video, and MIV video are significantly lower than for 2D video, the immersive video market is anticipated to grow significantly in the future, especially for 360-degree video, projected to rival or surpass the viewership of 2D video. In contrast, the adoption of 6DoF video technology is currently in its early stages, resulting in comparatively lower viewership numbers.
Figure 5 depicts the bandwidth requirements across various types of video applications. The graph highlights that 360-degree video demands the highest bandwidth, followed by immersive MIV video. The popularity of 360-degree video, coupled with the need for extensive data transmission to support a panoramic viewing experience, significantly drives up its bandwidth requirements compared to other video formats. In contrast, immersive PtCl and MIV videos, while also bandwidth-intensive, exhibit overall demands similar to that of 2D video due to their lower market penetration. MIV video, which involves synthesizing multiple videos during encoding, consequently incurs slightly higher bandwidth requirements than PtCl video. Moreover, live video applications, which prioritize low-latency requirements, employ encoding techniques that minimize latency but lead to lower compression rates and therefore higher bandwidth demands compared to VoD.
Figure 6 provides a visual representation of the bandwidth allocation situation before the proposed bandwidth allocation approach is applied. As the number of vehicles on the network increases, there is a corresponding rise in the total demand for bandwidth. This surge in demand often leads to network congestion, especially during peak traffic hours. During these congested periods, users experience significant competition for available bandwidth resources. This competition can result in adverse effects such as delays or interruptions for video applications. New passengers may find it difficult to start video sessions altogether, while existing users may encounter buffering issues or frequent pauses while streaming video content.
Figure 7 provides a detailed depiction of the bandwidth allocation strategy after the implementation of the proposed algorithm, focusing on various types of video applications. The graph prominently displays a significant increase in the allocated bandwidth specifically designated for live video applications.
At the core of this enhancement is the “Bandwidth Assignment Coordination” module, which operates within RSUs. This module assumes a critical role in efficiently managing bandwidth reassignment when demand exceeds available resources. Its primary function involves dynamically reallocating bandwidth from non-real-time applications to real-time applications, such as live video streaming, as necessary. This adaptive reallocation ensures that critical applications requiring real-time data transmission, like live video feeds, consistently receive adequate bandwidth even during peak demand periods.
In addition to the above-mentioned “Bandwidth Assignment Coordination” module, the “Vehicle Fleet Bandwidth Coordination” module leverages advanced 6G technology and Li-Fi V2V communication within vehicular networks. This module improves the total bandwidth utilization of vehicular networks by tapping into CF XL-MIMO wireless access points across different road segments. By redistributing surplus bandwidth from CF XL-MIMO access points at less-congested road segments to vehicles experiencing shortages, it effectively addresses bandwidth deficiencies in congested areas, thereby enhancing overall network performance.
Furthermore, the system incorporates the “Bandwidth Allocation for Non-Real-Time Applications” module, which operates proactively under uncongested network conditions. This module facilitates the pre-downloading of additional video segments for deferred viewing during journeys as vehicles pass through less congested road segments. This approach ensures uninterrupted non-real-time services even during peak periods when bandwidth is primarily allocated to real-time applications, thereby preventing service disruptions or degradation in quality due to bandwidth constraints.
Figure 8 provides a detailed illustration of the dynamic changes in bandwidth demand for all video applications within congested road segments throughout the day. During the timeframe from 7:00 a.m. to 10:00 p.m., it becomes apparent that the cumulative bandwidth requirements of vehicles in these congested areas surpass the capacity that CF XL-MIMO access points can provide. This limitation is clearly depicted by the green flat curve in Figure 8, indicating that many users experience unmet bandwidth needs during peak hours due to the constraints of CF XL-MIMO access points in these segments, resulting in compromised service quality and a diminished user experience.
Conversely, the purple curve in Figure 8 highlights the efficacy of the proposed bandwidth allocation approach in addressing these challenges. By leveraging cooperative vehicle fleets and utilizing V2V communication technology, our approach facilitates the transfer of surplus bandwidth from CF XL-MIMO access points located in less congested road segments to vehicles in high-traffic areas. This strategic redistribution significantly augments available bandwidth beyond the limits of CF XL-MIMO access points in congested segments.
To summarize, Figure 8 underscores how our innovative approach effectively manages bandwidth allocation dynamics, optimizes network performance, and ensures a more robust and reliable experience for vehicle passengers using video applications in varying traffic conditions.

4. Conclusions

Many studies of resource allocation in vehicular networks have centered on current technologies, overlooking both the imminent advancements expected with 6G and the emerging multimedia applications slated for integration into in-vehicle entertainment systems. This research addresses the critical problem of bandwidth allocation for next-generation vehicular multimedia applications. By integrating cutting-edge wireless communication technologies such as mmWave, THz, and CF XL-MIMO, we deliver efficient and stable bandwidth, ensuring uninterrupted service while vehicles are in motion. In addition, we leverage Li-Fi technology for V2V communication, enabling flexible utilization of idle bandwidth from CF XL-MIMO access points in less congested road segments and adapting dynamically to the varying demands across different road sections.
To meet the diverse demands of different video applications, we employ a variety of encoding techniques and dynamically adjust video quality based on real-time network conditions. This strategy improves transmission efficiency, ensuring that each video type receives adequate bandwidth and delivers a high-quality viewing experience under the prevailing network conditions.
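As one concrete illustration of layer-based quality adjustment, a receiver can drop the enhancement layer whenever available bandwidth falls below the combined rate. The thresholds used in the usage example come from the 1080P VoD row of Table 1 (0.987 Mbps base, 0.657 Mbps enhancement); the function itself is a simplified assumption, not the paper's exact adaptation logic:

```python
def select_layers(available_mbps, base_mbps, enh_mbps):
    """Layered-coding sketch: always try to send the base layer; add the
    enhancement layer only when leftover bandwidth covers it."""
    if available_mbps < base_mbps:
        return []                      # cannot sustain even base quality
    if available_mbps < base_mbps + enh_mbps:
        return ["base"]                # degraded but continuous playback
    return ["base", "enhancement"]     # full quality
```

For example, `select_layers(1.2, 0.987, 0.657)` keeps only the base layer, while 2.0 Mbps of available bandwidth admits both layers.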
The effectiveness of our approach is substantiated by simulation results. By implementing mechanisms such as VoD pre-download strategies and techniques for bandwidth reassignment and relay, we ensure seamless video playback even during peak traffic hours. Furthermore, our fleet bandwidth transfer mechanism reallocates unused bandwidth from less congested road sections to those with higher traffic, boosting bandwidth availability by 32% during peak periods. This enhancement meets user demand and markedly improves the overall viewing experience, mitigating buffering and delays caused by insufficient bandwidth.
Although we have integrated several innovative strategies aimed at alleviating bandwidth constraints in congested road segments, thereby greatly enhancing passenger video experiences during peak traffic conditions, there are still several limitations and potential issues for further investigation.
Firstly, compared to lower frequency bands, mmWave and THz suffer from higher path losses, limiting transmission distances. Therefore, the introduction of XL-MIMO antennas aims to mitigate these propagation challenges [58]. However, due to the high deployment costs of large-scale antennas, telecommunications providers tend to deploy XL-MIMO antennas in high-traffic hotspot areas to save on operational costs [59]. Additionally, the mobility of vehicles can lead to dynamic network topologies and frequent resource reallocations, making it essential to maximize resource utilization and energy efficiency. Thus, efficient energy management is crucial for ensuring the sustainability of 6G networks. To this end, artificial intelligence (AI) can be employed to achieve adaptive resource management by leveraging historical data and real-time sensor information to predict future traffic loads, preemptively allocate resources to avoid potential network congestion, and optimize the overall user experience [60].
As the pursuit of more efficient and reliable communication technologies continues in V2X communication, issues of security and privacy protection have become increasingly prominent. Given the extensive data exchange between vehicles and infrastructure in V2X communication, including vehicle location, speed, navigation information, and passenger data, the confidentiality, integrity, and availability of these data are crucial. Blockchain has emerged as a popular research focus in recent years in the field of information security, due to its decentralization, transparency, and resistance to data tampering [61]. Combining blockchain with other emerging technologies such as edge computing and AI can enhance real-time data processing and decision-making capabilities in V2X networks. By addressing these challenges, future research can significantly contribute to the development of secure, efficient, and sustainable 6G vehicular networks, thereby advancing the field of intelligent transportation systems and the realization of smart cities.

Author Contributions

Conceptualization and methodology, C.-J.H.; software, K.-W.H.; writing and editing, M.-E.J.; writing and editing, Y.-H.L.; writing and editing, H.-W.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Science and Technology Council, Taiwan, under Contract Number NSTC 112-2221-E-259-006.

Data Availability Statement

Research data are available upon individual requests to the corresponding author and are intended for use by collaborators engaged in research with the corresponding author’s research team.

Conflicts of Interest

Author Kai-Wen Hu was employed by the company Lookout, Inc. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Feng, B.; Feng, C.; Feng, D.; Wu, Y.; Xia, X.G. Proactive content caching scheme in urban vehicular networks. IEEE Trans. Commun. 2023, 71, 4165–4180. [Google Scholar] [CrossRef]
  2. Shin, Y.; Choi, H.S.; Nam, Y.; Cho, H.; Lee, E. Particle Swarm Optimization Video Streaming Service in Vehicular Ad-Hoc Networks. IEEE Access 2022, 10, 102710–102723. [Google Scholar] [CrossRef]
  3. Alaya, B.; Sellami, L. Multilayer video encoding for QoS managing of video streaming in VANET environment. ACM Trans. Multimed. Comput. Commun. Appl. (TOMM) 2022, 18, 1–19. [Google Scholar] [CrossRef]
  4. Nguyen, V.L.; Hwang, R.H.; Lin, P.C.; Vyas, A.; Nguyen, V.T. Towards the age of intelligent vehicular networks for connected and autonomous vehicles in 6G. IEEE Netw. 2022, 37, 44–51. [Google Scholar] [CrossRef]
  5. Guo, H.; Zhou, X.; Liu, J.; Zhang, Y. Vehicular intelligence in 6G: Networking, communications, and computing. Veh. Commun. 2022, 33, 100399. [Google Scholar] [CrossRef]
  6. Pourazad, M.T.; Doutre, C.; Azimi, M.; Nasiopoulos, P. HEVC: The new gold standard for video compression: How does HEVC compare with H.264/AVC? IEEE Consum. Electron. Mag. 2012, 1, 36–46. [Google Scholar] [CrossRef]
  7. Bross, B.; Wang, Y.K.; Ye, Y.; Liu, S.; Chen, J.; Sullivan, G.J.; Ohm, J.R. Overview of the versatile video coding (VVC) standard and its applications. IEEE Trans. Circuits Syst. Video Technol. 2021, 31, 3736–3764. [Google Scholar] [CrossRef]
  8. ISO/IEC JTC 1/SC 29/WG 04 MPEG Video Coding. Verification Test Report on the Compression Performance of Low Complexity Enhancement Video Coding. Available online: https://lcevc.org/wp-content/uploads/MPEG-Verification-Test-Report-on-the-Compression-Performance-of-LCEVC-Meeting-MPEG-134-May-2021.pdf (accessed on 1 May 2024).
  9. Tu, Z.; Zong, T.; Xi, X.; Ai, L.; Jin, Y.; Zeng, X.; Fan, Y. Content adaptive tiling method based on user access preference for streaming panoramic video. In Proceedings of the 2018 IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 12–14 January 2018; IEEE: New York, NY, USA, 2018; pp. 1–4. [Google Scholar]
  10. Chen, Y.; Hannuksela, M.M.; Suzuki, T.; Hattori, S. Overview of the MVC+ D 3D video coding standard. J. Vis. Commun. Image Represent. 2014, 25, 679–688. [Google Scholar] [CrossRef]
  11. Li, L.; Wang, R.; Zhang, X. A tutorial review on point cloud registrations: Principle, classification, comparison, and technology challenges. Math. Probl. Eng. 2021, 2021, 1–32. [Google Scholar] [CrossRef]
  12. Garus, P.; Henry, F.; Jung, J.; Maugey, T.; Guillemot, C. Immersive video coding: Should geometry information be transmitted as depth maps? IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 3250–3264. [Google Scholar] [CrossRef]
  13. Shin, H.C.; Jeong, J.Y.; Lee, G.; Kakli, M.U.; Yun, J.; Seo, J. Enhanced pruning algorithm for improving visual quality in MPEG immersive video. ETRI J. 2022, 44, 73–84. [Google Scholar] [CrossRef]
  14. Vadakital, V.K.M.; Dziembowski, A.; Lafruit, G.; Thudor, F.; Lee, G.; Alface, P.R. The MPEG immersive video standard—Current status and future outlook. IEEE Multimed. 2022, 29, 101–111. [Google Scholar] [CrossRef]
  15. Boyce, J.M.; Doré, R.; Dziembowski, A.; Fleureau, J.; Jung, J.; Kroon, B.; Salahieh, B.; Vadakital, V.K.M.; Yu, L. MPEG immersive video coding standard. Proc. IEEE 2021, 109, 1521–1536. [Google Scholar] [CrossRef]
  16. Salahieh, B.; Bhatia, S.; Boyce, J. Multi-Pass Renderer in MPEG Test Model for Immersive Video. In Proceedings of the 2019 Picture Coding Symposium (PCS), Ningbo, China, 12–15 November 2019; IEEE: New York, NY, USA, 2019; pp. 1–5. [Google Scholar]
  17. Wu, C.H.; Hsu, C.F.; Hung, T.K.; Griwodz, C.; Ooi, W.T.; Hsu, C.H. Quantitative comparison of point cloud compression algorithms with pcc arena. IEEE Trans. Multimed. 2022, 25, 3073–3088. [Google Scholar] [CrossRef]
  18. Xiong, J.; Gao, H.; Wang, M.; Li, H.; Ngan, K.N.; Lin, W. Efficient geometry surface coding in V-PCC. IEEE Trans. Multimed. 2022, 25, 3329–3342. [Google Scholar] [CrossRef]
  19. Dumic, E.; da Silva Cruz, L.A. Subjective Quality Assessment of V-PCC-Compressed Dynamic Point Clouds Degraded by Packet Losses. Sensors 2023, 23, 5623. [Google Scholar] [CrossRef] [PubMed]
  20. Graziosi, D.; Nakagami, O.; Kuma, S.; Zaghetto, A.; Suzuki, T.; Tabatabai, A. An overview of ongoing point cloud compression standardization activities: Video-based (V-PCC) and geometry-based (G-PCC). APSIPA Trans. Signal Inf. Process. 2020, 9, e13. [Google Scholar] [CrossRef]
  21. Yang, M.; Luo, Z.; Hu, M.; Chen, M.; Wu, D. A Comparative Measurement Study of Point Cloud-Based Volumetric Video Codecs. IEEE Trans. Broadcast. 2023, 69, 715–726. [Google Scholar] [CrossRef]
  22. Shi, Y.; Venkatram, P.; Ding, Y.; Ooi, W.T. Enabling low bit-rate mpeg v-pcc-encoded volumetric video streaming with 3d sub-sampling. In Proceedings of the 14th Conference on ACM Multimedia Systems, Vancouver, BC, Canada, 7–10 June 2023; pp. 108–118. [Google Scholar]
  23. Han, B.; Liu, Y.; Qian, F. ViVo: Visibility-aware mobile volumetric video streaming. In Proceedings of the 26th Annual International Conference on Mobile Computing and Networking, London, UK, 21–25 September 2020; pp. 1–13. [Google Scholar]
  24. Li, J.; Zhang, C.; Liu, Z.; Hong, R.; Hu, H. Optimal volumetric video streaming with hybrid saliency based tiling. IEEE Trans. Multimed. 2022, 25, 2939–2953. [Google Scholar] [CrossRef]
  25. Khan, A.R.; Jamlos, M.F.; Osman, N.; Ishak, M.I.; Dzaharudin, F.; Yeow, Y.K.; Khairi, K.A. DSRC technology in Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) IoT system for Intelligent Transportation System (ITS): A review. In Recent Trends in Mechatronics Towards Industry 4.0: Selected Articles from iM3F 2020, Malaysia; Springer: Berlin/Heidelberg, Germany, 2022; pp. 97–106. [Google Scholar]
  26. Zheng, K.; Zheng, Q.; Chatzimisios, P.; Xiang, W.; Zhou, Y. Heterogeneous vehicular networking: A survey on architecture, challenges, and solutions. IEEE Commun. Surv. Tutor. 2015, 17, 2377–2396. [Google Scholar] [CrossRef]
  27. Jiang, W.; Han, B.; Habibi, M.A.; Schotten, H.D. The road towards 6G: A comprehensive survey. IEEE Open J. Commun. Soc. 2021, 2, 334–366. [Google Scholar] [CrossRef]
  28. Chowdhury, M.Z.; Shahjalal, M.; Ahmed, S.; Jang, Y.M. 6G wireless communication systems: Applications, requirements, technologies, challenges, and research directions. IEEE Open J. Commun. Soc. 2020, 1, 957–975. [Google Scholar] [CrossRef]
  29. Wu, X.; Soltani, M.D.; Zhou, L.; Safari, M.; Haas, H. Hybrid LiFi and WiFi networks: A survey. IEEE Commun. Surv. Tutor. 2021, 23, 1398–1420. [Google Scholar] [CrossRef]
  30. Farrag, M.; Al Ayidh, A.; Hussein, H.S. Conditional Most-Correlated Distribution-Based Load-Balancing Scheme for Hybrid LiFi/WiGig Network. Sensors 2023, 24, 220. [Google Scholar] [CrossRef] [PubMed]
  31. Ma, S.; Sheng, H.; Sun, J.; Li, H.; Liu, X.; Qiu, C.; Safari, M.; Al-Dhahir, N.; Li, S. Feasibility Conditions for Mobile LiFi. IEEE Trans. Wirel. Commun. 2024, 23, 7911–7923. [Google Scholar] [CrossRef]
  32. Jamuna, R.; Abarna, B.; Kumar, A.S.; Ganesh, L.; Pranesh, H. LIFI for Smart Transportation: Enabling Secure and Safe Vehicular Communication. J. Xi’an Shiyou Univ. Nat. Sci. Ed. 2023, 19, 1–8. [Google Scholar]
  33. Saravanan, M.; Deepak, R.R.; Ravinkumar, P.L.; Sivabaalavignesh, A. Vehicle communication using visible light (Li-Fi) technology. In Proceedings of the 2022 8th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India, 25–26 March 2022; IEEE: New York, NY, USA, 2022; Volume 1, pp. 885–889. [Google Scholar]
  34. Su, L.; Niu, Y.; Han, Z.; Ai, B.; He, R.; Wang, Y.; Wang, N.; Su, X. Content distribution based on joint V2I and V2V scheduling in mmWave vehicular networks. IEEE Trans. Veh. Technol. 2022, 71, 3201–3213. [Google Scholar] [CrossRef]
  35. Lin, Z.; Wang, L.; Ding, J.; Xu, Y.; Tan, B. Tracking and transmission design in terahertz V2I networks. IEEE Trans. Wirel. Commun. 2022, 22, 3586–3598. [Google Scholar] [CrossRef]
  36. Lin, Z.; Fang, Y.; Chen, P.; Chen, F.; Zhang, G. Modeling and analysis of edge caching for 6G mmWave vehicular networks. IEEE Trans. Intell. Transp. Syst. 2022, 24, 7422–7434. [Google Scholar] [CrossRef]
  37. Elhoushy, S.; Ibrahim, M.; Hamouda, W. Cell-free massive MIMO: A survey. IEEE Commun. Surv. Tutor. 2021, 24, 492–523. [Google Scholar] [CrossRef]
  38. Zhao, J.; Hu, F.; Gong, Y.; Wang, D. Downlink Resource Intelligent Scheduling in mmWave Cell-Free Urban Vehicle Network. IEEE Trans. Veh. Technol. 2024, 1–14. [Google Scholar] [CrossRef]
  39. Lu, H.; Zeng, Y.; You, C.; Han, Y.; Zhang, J.; Wang, Z.; Dong, Z.; Jin, S.; Wang, C.-X.; Jiang, T.; et al. A tutorial on near-field XL-MIMO communications towards 6G. IEEE Commun. Surv. Tutor. 2024, 1. [Google Scholar] [CrossRef]
  40. Liu, Z.; Zhang, J.; Liu, Z.; Xiao, H.; Ai, B. Double-layer power control for mobile cell-free XL-MIMO with multi-agent reinforcement learning. IEEE Trans. Wirel. Commun. 2023, 23, 4658–4674. [Google Scholar] [CrossRef]
  41. Freitas, M.; Souza, D.; da Costa, D.B.; Borges, G.; Cavalcante, A.M.; Marquezini, M.; Almeida, I.; Rodrigues, R.; Costa, J.C. Matched-decision AP selection for user-centric cell-free massive MIMO networks. IEEE Trans. Veh. Technol. 2023, 72, 6375–6391. [Google Scholar] [CrossRef]
  42. Vehicle Flow Statistics for New York City. Available online: https://www.nyc.gov/html/dot/html/about/datafeeds.shtml (accessed on 10 January 2024).
  43. 90+ Powerful Virtual Reality Statistics to Know in 2024. Available online: https://www.g2.com/articles/virtual-reality-statistics (accessed on 3 May 2024).
  44. Virtual Reality Statistics. Available online: https://99firms.com/blog/virtual-reality-statistics/#gref (accessed on 2 May 2024).
  45. 47 Latest Live Streaming Statistics For 2024: The Definitive List. Available online: https://bloggingwizard.com/live-streaming-statistics/ (accessed on 5 May 2024).
  46. YouTube Recommended Upload Encoding Settings. Available online: https://support.google.com/youtube/answer/1722171?hl=en-GB (accessed on 10 January 2024).
  47. Battista, S.; Meardi, G.; Ferrara, S.; Ciccarelli, L.; Maurer, F.; Conti, M.; Orcioni, S. Overview of the low complexity enhancement video coding (LCEVC) standard. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 7983–7995. [Google Scholar] [CrossRef]
  48. Wiredrive 360 Video Specs. Available online: https://support.wiredrive.com/hc/en-us/articles/115000282194-Wiredrive-360-Video-Specs (accessed on 10 January 2024).
  49. The Best Encoding Settings for Your 4k 360 3D VR Videos + FREE Encoding Tool. Available online: https://headjack.io/blog/best-encoding-settings-resolution-for-4k-360-3d-vr-videos/ (accessed on 10 January 2024).
  50. Zare, A.; Aminlou, A.; Hannuksela, M.M.; Gabbouj, M. HEVC-compliant tile-based streaming of panoramic video for virtual reality applications. In Proceedings of the 24th ACM international conference on Multimedia, Amsterdam, The Netherlands, 15–19 October 2016; pp. 601–605. [Google Scholar]
  51. Yu, D.; Chen, R.; Li, X.; Xiao, M.; Zhang, G.; Liu, Y. A GPU-Enabled Real-Time Framework for Compressing and Rendering Volumetric Videos. IEEE Trans. Comput. 2023, 73, 789–800. [Google Scholar] [CrossRef]
  52. Cao, C. Compression d’objets 3D Représentés par Nuages de Points. Doctoral Dissertation, Institut Polytechnique de Paris, Paris, France, 2021. [Google Scholar]
  53. Santos, C.; Tavares, L.; Costa, E.; Rehbein, G.; Corrêa, G.; Porto, M. Coding Efficiency and Complexity Analysis of the Geometry-based Point Cloud Encoder. In Proceedings of the 2024 IEEE 15th Latin America Symposium on Circuits and Systems (LASCAS), Punta del Este, Uruguay, 27 February–1 March 2024; IEEE: New York, NY, USA, 2024; pp. 1–5. [Google Scholar]
  54. Gupta, M.; Navaratna, N.; Szriftgiser, P.; Ducournau, G.; Singh, R. 327 Gbps THz silicon photonic interconnect with sub-λ bends. Appl. Phys. Lett. 2023, 123, 171102. [Google Scholar] [CrossRef]
  55. Kumar, A.; Gupta, M.; Pitchappa, P.; Wang, N.; Szriftgiser, P.; Ducournau, G.; Singh, R. Phototunable chip-scale topological photonics: 160 Gbps waveguide and demultiplexer for THz 6G communication. Nat. Commun. 2022, 13, 5404. [Google Scholar] [CrossRef]
  56. Li, X.; Xiao, J.; Yu, J. Long-distance wireless mm-wave signal delivery at W-band. J. Light. Technol. 2015, 34, 661–668. [Google Scholar] [CrossRef]
  57. LiFi. Available online: https://lifi.co/lifi-speed/ (accessed on 20 May 2024).
  58. Wang, Z.; Zhang, J.; Du, H.; Niyato, D.; Cui, S.; Ai, B.; Debbah, M.; Letaief, K.B.; Poor, H.V. A tutorial on extremely large-scale MIMO for 6G: Fundamentals, signal processing, and applications. IEEE Commun. Surv. Tutor. 2024, 1. [Google Scholar] [CrossRef]
  59. Shoaib, M.; Husnain, G.; Sayed, N.; Lim, S. Unveiling the 5G Frontier: Navigating Challenges, Applications, and Measurements in Channel Models and Implementations. IEEE Access 2024, 12, 59533–59560. [Google Scholar] [CrossRef]
  60. Noor-A-Rahim, M.; Liu, Z.; Lee, H.; Khyam, M.O.; He, J.; Pesch, D.; Moessner, K.; Saad, W.; Poor, H.V. 6G for vehicle-to-everything (V2X) communications: Enabling technologies, challenges, and opportunities. Proc. IEEE 2022, 110, 712–734. [Google Scholar] [CrossRef]
  61. Liu, Y.; Zhu, K.; Hua, W.; Zhu, Y. Novel Enabling Technology for V2X Network: Blockchain. In Communication, Computation and Perception Technologies for Internet of Vehicles; Springer Nature: Singapore, 2023; pp. 225–244. [Google Scholar]
Figure 1. Illustration of Video Applications Supported in V2X Communication.
Figure 2. Diagram of Bandwidth Allocation for Video Applications in a Cell-Free In-Vehicle Network Architecture.
Figure 3. Volume of vehicles throughout a day.
Figure 4. Counts of video applications initiated by passengers in vehicles throughout a day.
Figure 5. Bandwidth requirements for video applications within a day.
Figure 6. The bandwidth allocation for the video applications prior to the algorithm’s implementation.
Figure 7. Bandwidth allocated for video applications after applying the proposed algorithm.
Figure 8. The available bandwidth both prior to and following the algorithm’s implementation and user bandwidth demand.
Table 1. Minimum bandwidth requirements for 2D Video.
Video Type | Resolution/Frame Rate | Minimum Required Bandwidth | Bandwidth Required for Base Layer | Bandwidth Required for Enhancement Layer
VoD | 1080P/30FPS | 1.644 Mbps | 0.987 Mbps | 0.657 Mbps
VoD | 2K/30FPS | 3.288 Mbps | 2.304 Mbps | 0.984 Mbps
VoD | 4K/30FPS | 7.193 Mbps | 5.036 Mbps | 2.157 Mbps
VoD | 8K/30FPS | 16.443 Mbps | 11.511 Mbps | 4.932 Mbps
Live video | 720P/30FPS | 2.52 Mbps | 2.142 Mbps | 0.378 Mbps
Live video | 1080P/30FPS | 6.3 Mbps | 2.52 Mbps | 3.78 Mbps
Live video | 2K/30FPS | 9.45 Mbps | 7.56 Mbps | 1.89 Mbps
Live video | 4K/30FPS | 18.9 Mbps | 15.12 Mbps | 3.78 Mbps
Table 2. Minimum bandwidth requirements for 360-degree VoD.
User Level | Resolution/Frame Rate | Minimum Required Bandwidth | Bandwidth Required for Base Layer | Bandwidth Required for Enhancement Layer
Regular | 1080P/30FPS | 0.554 Mbps | 0.387 Mbps | 0.166 Mbps
Regular | 2K/30FPS | 1.107 Mbps | 0.774 Mbps | 0.332 Mbps
Regular | 4K/30FPS | 1.732 Mbps | 1.212 Mbps | 0.519 Mbps
Regular | 8K/30FPS | 6.93 Mbps | 4.851 Mbps | 2.079 Mbps
Premium | 1080P/30FPS | 0.986 Mbps | 0.69 Mbps | 0.295 Mbps
Premium | 2K/30FPS | 1.97 Mbps | 1.379 Mbps | 0.591 Mbps
Premium | 4K/30FPS | 3.083 Mbps | 2.158 Mbps | 0.924 Mbps
Premium | 8K/30FPS | 12.332 Mbps | 8.632 Mbps | 3.699 Mbps
Premium | 12K/30FPS | 103.09 Mbps | 72.163 Mbps | 30.927 Mbps
Table 3. Minimum bandwidth requirements for 360-degree live video.
User Level | Resolution/Frame Rate | Minimum Required Bandwidth | Bandwidth Required for Base Layer | Bandwidth Required for Enhancement Layer
Regular | 1080P/30FPS | 1.699 Mbps | 1.444 Mbps | 0.254 Mbps
Regular | 2K/30FPS | 2.124 Mbps | 1.805 Mbps | 0.318 Mbps
Regular | 4K/30FPS | 3.398 Mbps | 2.888 Mbps | 0.509 Mbps
Regular | 8K/30FPS | 13.595 Mbps | 11.555 Mbps | 2.039 Mbps
Premium | 1080P/30FPS | 3.024 Mbps | 2.57 Mbps | 0.453 Mbps
Premium | 2K/30FPS | 3.78 Mbps | 3.213 Mbps | 0.567 Mbps
Premium | 4K/30FPS | 6.048 Mbps | 5.14 Mbps | 0.771 Mbps
Premium | 8K/30FPS | 24.192 Mbps | 20.563 Mbps | 3.084 Mbps
Premium | 12K/30FPS | 210.924 Mbps | 179.285 Mbps | 31.638 Mbps
Table 4. Minimum bandwidth requirements for PtCl VoD.
User Level | Point Density | Minimum Required Bandwidth | Bandwidth Required for Base Layer | Bandwidth Required for Enhancement Layer
Regular | Low | 4.545 Mbps | 3.181 Mbps | 1.363 Mbps
Regular | Medium | 11.067 Mbps | 7.746 Mbps | 3.32 Mbps
Regular | High | 16.32 Mbps | 11.424 Mbps | 4.896 Mbps
Premium | Low | 16.16 Mbps | 11.312 Mbps | 4.848 Mbps
Premium | Medium | 39.352 Mbps | 27.546 Mbps | 11.805 Mbps
Premium | High | 58.026 Mbps | 40.618 Mbps | 17.407 Mbps
Table 5. Minimum bandwidth requirements for PtCl live video.
User Level | Number of PtCl | Minimum Required Bandwidth
Regular | Low | 33.218 Mbps
Regular | Medium | 56.497 Mbps
Regular | High | 83.7 Mbps
Premium | Low | 541.655 Mbps
Premium | Medium | 921.24 Mbps
Premium | High | 1364.8 Mbps
Table 6. Minimum bandwidth requirements for MIV VoD.
User Level | Resolution/Frame Rate | Minimum Required Bandwidth | Bandwidth Required for Base Layer | Bandwidth Required for Enhancement Layer
Regular | 1080P/30FPS | 12.036 Mbps | 7.221 Mbps | 4.814 Mbps
Regular | 2K/30FPS | 24.072 Mbps | 16.85 Mbps | 7.221 Mbps
Regular | 4K/30FPS | 52.658 Mbps | 36.86 Mbps | 15.797 Mbps
Regular | 8K/30FPS | 180.544 Mbps | 126.38 Mbps | 54.163 Mbps
Premium | 1080P/30FPS | 16.056 Mbps | 11.24 Mbps | 4.816 Mbps
Premium | 2K/30FPS | 32.112 Mbps | 22.478 Mbps | 14.4 Mbps
Premium | 4K/30FPS | 70.245 Mbps | 49.171 Mbps | 21.073 Mbps
Premium | 8K/30FPS | 240.84 Mbps | 168.58 Mbps | 72.252 Mbps
Table 7. Minimum bandwidth requirements for MIV live video.
User Level | Resolution/Frame Rate | Minimum Required Bandwidth | Bandwidth Required for Base Layer | Bandwidth Required for Enhancement Layer
Regular | 1080P/30FPS | 36.892 Mbps | 31.358 Mbps | 5.533 Mbps
Regular | 2K/30FPS | 46.116 Mbps | 39.198 Mbps | 6.917 Mbps
Regular | 4K/30FPS | 73.785 Mbps | 62.717 Mbps | 11.067 Mbps
Premium | 1080P/30FPS | 49.2 Mbps | 41.82 Mbps | 7.38 Mbps
Premium | 2K/30FPS | 61.512 Mbps | 52.285 Mbps | 9.22 Mbps
Premium | 4K/30FPS | 98.419 Mbps | 83.656 Mbps | 14.762 Mbps
Table 8. Wireless communication technology: Transmission distance and bandwidth.
Wireless Communication Technology | Transmission Distance | Maximum Bandwidth Achievable
THz | 400 m | 327 Gbps
mmWave | 1.7 km | 20 Gbps
Li-Fi | 10 m | 224 Gbps

Share and Cite

MDPI and ACS Style

Huang, C.-J.; Hu, K.-W.; Jian, M.-E.; Lien, Y.-H.; Cheng, H.-W. Supporting Immersive Video Streaming via V2X Communication. Electronics 2024, 13, 2796. https://doi.org/10.3390/electronics13142796

