1. Introduction
Electric mobility is accelerating worldwide, particularly in major developed countries, and the momentum is expected to continue thanks to many factors, including plug-in electric vehicle (PEV) market trends; PEV user preference and awareness; and policy efforts such as zero-emission vehicle mandates, taxation, and purchase incentives [1]. Given their charging power and energy requirements, PEVs will impact power systems and are expected to put strain on the grids from which they charge [2]. Since reinforcing the grid incurs significant costs and typically takes years, managing the ever-increasing charging demand is essential to mitigate these grid impacts [3,4]. Charging management can be performed either unidirectionally, through smart charging algorithms, or bidirectionally, also known as vehicle-to-grid (V2G), to ensure cost-effective charging and the efficient utilization of grid assets [5]. Charging management has also been shown to help individuals use and integrate renewable generation [6]. Seamless charging management requires coordination among the different entities in a PEV-charging ecosystem; communication links therefore need to be established among these entities.
Depending on the framework, an electric vehicle grid integration (EVGI) ecosystem may include entities such as PEVs; electric vehicle supply equipment (EVSE); and third-party operators, i.e., one or a combination of charging station operators, aggregators, energy suppliers, and grid operators [7]. Communication protocols are the sets of rules and principles that allow these entities to communicate and exchange data in real time. In the charging-coordination context, the data may include the PEV ID, charging power levels, charging schedule, real-time pricing, and so on [8]. The communication protocols are divided into front-end and back-end protocols. Front-end protocols such as IEC 61851 and ISO/IEC 15118 cover the link between a PEV and EVSE, whereas back-end protocols such as the Open Charge Point Protocol (OCPP) cover the communication links between EVSE and the third-party operators [7]. Compatibility is highlighted as one of the most important factors affecting the choice of protocols for EVGI [9]. Therefore, interoperable PEV roaming protocols, similar to those in mobile telecommunications, are proposed as a solution for smart charging in [10,11].
In addition to interoperability, reliability and latency are other vital considerations in facilitating efficient and reliable EVGI [12,13]. Downtimes or failures could lead to charging disruptions. Accordingly, the packet loss rate and throughput metrics are used to measure reliability [14]. In [14], a V2G communication architecture with several hierarchical aggregators is simulated, in which PEVs communicate with their charging station aggregators via Wi-Fi links and a fiber-optic-based Ethernet link connects the hierarchical aggregators and the grid operator. It is shown that the packet delivery ratio decreases from 84% to 72% for a packet error probability of 0.0001 as the number of PEVs increases from 36 to 108. The throughput further reduces from 206 kbps to 128 kbps for 36 EVs as the packet error probability increases. The average delay varies between 2 s and 5 min for the proposed V2G system. However, low-latency communication is essential for EVGI to minimize delays in charging coordination and maintain grid stability. For example, the Enhanced Frequency Control Capability scheme in the UK requires a response time of 500 ms to dispatch a fast-frequency response [15]. Inala et al. [16] simulate the impact of bit errors due to packet losses on the node voltage for the V2G framework proposed in [14]. It is shown that V2G can perform well at low bit error probabilities. The proposed fuzzy logic controller can help mitigate voltage deviations due to packet losses between the charging station controller and the grid. Quinn et al. [17] compare the reliability and availability of PEVs as ancillary service providers with and without the presence of aggregators. An optimal bidding strategy in California’s ancillary services market for a group of 30 PEVs is presented in an actual implementation in [18]. Coordinated bidding is further studied in [19,20]. However, these studies assume a perfect communication system. A mixed power line and 4G communication network is considered for EVGI in [21], and the impact of jitter delays on ancillary services is quantified using network simulator-3. In [22], the authors investigated the effects of wireless communication delays on the sensitivity of load-frequency control services. While there is much theoretical work and simulation-based analysis on the proposed communication architectures for EVGI, their practical validation through real-world experimentation still needs to be explored. Experimental testing is essential for validating the performance and reliability of the proposed frameworks in practical V2G implementations.
Emerging communication technologies used in the Internet of Things (IoT) for several public EVGI implementations are reported in [23,24]. These technologies comprise the third and fourth generations of cellular communication (3G and 4G), ADSL, fiber, and short-range Wi-Fi, enabling the connection of many PEV devices to the main internet connection point. These technologies are examined in the OFCOM report [25], revealing achievable latencies of approximately 12–13 ms and 19–22 ms for fiber and ADSL connections, respectively. An average latency of 35 ms for 4G and an average one-way latency of 45 ms for 3G are reported in [26]. However, these latency measurements are based on short “ping” packets rather than real data packets of varying sizes, which would be more representative of actual EVGI applications and implementations.
In our preliminary work [27], we developed a V2G test bed for a charging station to test the performance of the proposed communication infrastructure. The communication infrastructure included a Wi-Fi link within the charging station between PEVs and an EVSE, a 4G cellular network between the EVSE and the base station, and fiber-optic internet between the base station and the grid’s control room. The performance was assessed in terms of latency and packet losses, along with the signal strength, for a single PEV user over the course of one week. Expanding upon that assessment, this study includes multiple PEV users across various locations, particularly within city charging parking lots. This extension enables a comprehensive study of the UDP and TCP internet protocols for EVGI applications over a one-month period using 4G technology. Furthermore, the practical implementation is compared with a statistical model. This comparison utilizes higher-order statistical techniques to estimate the latency in a multi-PEV-user scenario using data from just one sensor node and assesses the accuracy of these estimates relative to the practical implementation. The remainder of this paper is organized as follows: Section 2 describes the proposed EVGI framework and the established EVGI emulation test bed. The experimental results are presented in Section 3. A discussion on the practicability of EVGI operations using the UK’s 4G network is given in Section 4. Section 5 provides concluding remarks.
3. Experimental Results and Analysis
In this section, we discuss the results from the EVGI experimental test bed. We used a laptop PC as the network controller and low-cost Raspberry Pi computers to emulate the client-side devices. This system makes use of the UK’s internet network to emulate a practical EVGI system.
Figure 1 illustrates the EVGI use case studied, where vehicles are stationed in charging parking areas situated in urban centers. In such scenarios, the signal strength greatly influences the end-to-end delay and packet loss performance. To emulate practical application environments, we measured the signal strength at common parking locations within a university’s premises.
Table 1 shows that the 4G wireless signal strength is divided into three distinct categories: poor, medium, and strong. It is important to emphasize that in this study, data were transmitted over an approximately one-month period at one-minute intervals, and the long-term average signal was derived from the one-month measurements.
The principal network performance metrics in EVGI applications encompass (1) end-to-end latency, which determines how quickly PEVs respond to market signals, and (2) the packet loss ratio, which is essential when vital market signals are shared. We assess both TCP and UDP as candidate transport protocols. Additionally, we distinguish between measurements from the client to the server and from the server to the client, as described in the preceding section.
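To illustrate how these two metrics can be collected, the following minimal Python sketch sends timestamped UDP probes of a configurable size and records per-packet latency and the packet loss ratio. This is not the actual test-bed code: the echo server, port, packet counts, and the approximation of one-way latency as half the round-trip time are all simplifying assumptions.

```python
import socket
import threading
import time

def run_echo_server(host="127.0.0.1", port=9999, expected=20, timeout=5.0):
    """Minimal UDP echo endpoint standing in for the remote server."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    sock.settimeout(timeout)
    handled = 0
    try:
        while handled < expected:
            data, addr = sock.recvfrom(65535)
            sock.sendto(data, addr)  # echo the probe straight back
            handled += 1
    except socket.timeout:
        pass  # stop if the client finishes early or probes are lost
    finally:
        sock.close()

def measure(host="127.0.0.1", port=9999, n_packets=20, size=1000, timeout=1.0):
    """Send timestamped probes of `size` bytes; return (latencies_ms, loss_ratio).

    One-way latency is approximated as half the round-trip time, which
    sidesteps clock synchronization between the two endpoints.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    payload = b"x" * size
    latencies, lost = [], 0
    for _ in range(n_packets):
        t0 = time.perf_counter()
        sock.sendto(payload, (host, port))
        try:
            sock.recvfrom(65535)
            latencies.append((time.perf_counter() - t0) * 1000.0 / 2.0)
        except socket.timeout:
            lost += 1  # no echo within the deadline counts as a lost packet
    sock.close()
    return latencies, lost / n_packets
```

Running the server in a background thread and calling `measure()` against it reproduces the latency and loss bookkeeping used throughout this section, albeit over the loopback interface rather than a 4G link.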
Figure 2a,b present the mean latency for all instances and data packet sizes ranging from 50 B to 10 KB for the TCP and UDP protocols, respectively. While these findings yield valuable knowledge regarding latency, cumulative distribution functions (CDFs) are required to compare latency comprehensively. Furthermore, the majority of service-level agreements between the grid operator and market participants specify latency requirements at confidence levels of 90% or greater. With this in mind, we provide CDF computations for both TCP and UDP across all signal strengths. These latency estimations comprise the latency from PEVs to an aggregator via a Wi-Fi link, plus the latency from the aggregator to the grid operator server via the 4G link, including the fiber broadband link between base stations and the grid operator. A substantial part of the latency in this calculation can be attributed to the 4G link. It was observed in our earlier work [27,29] that the Wi-Fi link can contribute up to 50 ms of the total latency. However, the number of PEVs connected to the Wi-Fi access point has a significant impact on this figure. A high PEV user density per charging station aggregator can lead to congestion, creating a substantial performance bottleneck. In such situations, UDP packets might be lost irretrievably, as evidenced when comparing Figure 2a,b, where the average UDP results are more unstable than the TCP results. This can cause unacceptable packet lags and session interruptions. For TCP packets, congestion escalates the number of packet retransmissions. We refrained from evaluating the Wi-Fi delay experimentally, as it would require deploying hundreds of emulators; instead, we used values from the existing literature [29]. It is evident that the UDP protocol facilitates quicker data transmission but with a much larger spread in the achieved latencies than TCP.
Following this, we elaborate on the packet loss ratio for both the UDP and TCP protocols. As depicted in Figure 3 and Table 2, the UDP protocol exhibits a significantly higher packet loss ratio than TCP, primarily due to TCP’s inherent capability to ensure packet delivery. In [29], UDP was shown to achieve a lower latency, provided that the packet loss rate remains within acceptable boundaries. Despite the elevated packet losses associated with UDP, its faster communication leaves room for packet retransmission within a defined time frame for crucial applications, thereby improving delivery reliability. It is noteworthy that the packet loss ratio tends to diminish for packet sizes close to the Maximum Transmission Unit (MTU), which is set at 1500 bytes, while we observe unacceptably high packet loss rates for very small packet sizes, particularly those of 100 bytes or less. This suggests that the most efficient packet size is around the MTU; if a payload falls short of this, padding the remainder of the packet with zeros or other information to constitute one MTU-sized packet may be a viable strategy.
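The padding strategy just described can be sketched as follows, assuming an IPv4/UDP datagram with no IP options, so that a 1500-byte MTU leaves 1472 bytes of payload; the constants and function name are illustrative, not taken from the test bed.

```python
MTU = 1500
IPV4_HEADER = 20   # bytes, assuming no IP options
UDP_HEADER = 8     # bytes
MAX_PAYLOAD = MTU - IPV4_HEADER - UDP_HEADER  # 1472 bytes of UDP payload

def pad_to_mtu(payload: bytes) -> bytes:
    """Zero-pad a small payload so the datagram fills one MTU-sized frame.

    Payloads already at or above MAX_PAYLOAD are returned unchanged;
    fragmentation of oversized payloads is out of scope for this sketch.
    """
    if len(payload) >= MAX_PAYLOAD:
        return payload
    return payload + b"\x00" * (MAX_PAYLOAD - len(payload))
```

A receiver would need to know the true payload length (e.g., via a length prefix) to strip the padding again; that framing detail is omitted here.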
Figure 4 and Table 3 present the proportion of data packets lost across all three parking lots evaluated using the test bed. Figure 5 displays the corresponding CDF of the TCP latency data acquired from the parking lots, while Figure 6 presents a detailed view of the corresponding CDF of the UDP latency.
Figure 4 and Table 3 provide a detailed depiction of packet loss across the three PEV parking lots with varying signal strengths. It is evident that the TCP packets experience the lowest packet loss, under 1%, even for smaller data packets below 1 KB. This contrasts sharply with UDP’s performance for equivalent data packet sizes, where losses range between 5% and 45%. Notably, UDP’s packet loss for data packet sizes of 2, 4, and 10 KB remains very low, at under 3%. Figure 5 and Figure 6 present the one-way latency (OWL) performance for both TCP and UDP across all three charging parking sites. For TCP, even with large data packets, the OWL is consistently under 600 ms at a 90% confidence level. In contrast, for the same sites, UDP achieves 90% reliability in under 500 ms for the data packet sizes exhibiting minimal packet loss.
As previously mentioned, the minimum packet loss tends to occur at 1 KB, a size close to the MTU. In Figure 7, we compare all endpoint IoT devices (e.g., PEVs) communicating from three distinct charging parking lots dispersed throughout the city, each with a different 4G signal strength (poor, medium, or strong), for both the TCP and UDP protocols. In this experiment, all PEV users simultaneously strive to establish a connection with the server. Owing to the inherent traits of the network and the queuing between the first user connecting to the server and the last one, latencies of 100 ms for TCP and below 50 ms for UDP are experienced for the same packet size at a 90% confidence level.
Figure 7 depicts the CDF plot of the average OWL values for 1–8 PEV users utilizing 4G last-mile technologies. It is feasible to extrapolate these results to assess the latency for N PEV users attempting to relay crucial data to the control center simultaneously. Communication involving N PEV users is anticipated to reach the control center more swiftly than in a single-user scenario. Following this, we can use the Order Statistics (OS) from our previous research [29] to calculate the CDF plot of the OWL as follows:
F_N(t) = 1 − [1 − F(t)]^N,  (1)
where F(t) represents the CDF for an individual link, akin to the CDF outcomes illustrated in Figure 7. This equation is valid provided that the latency on one communication link is statistically independent of that on any other link. This computation offers insights into the speed at which information regarding an event (for instance, a network fault) can be conveyed to the control center from a multitude of distributed PEV users within a specific area. Based on our computations for up to 64 users in our previous work [27], the improvement becomes less significant beyond 16 devices. Increasing the number of PEV users communicating with the control center significantly reduces the latency compared to a single-user scenario, with the improvement in the 90% latency more noticeable for UDP than for TCP.
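The order-statistics computation, F_N(t) = 1 − [1 − F(t)]^N for the fastest of N independent links, can be applied to measured data via an empirical per-link CDF, as in the following sketch. The gamma-distributed latency samples in the usage example are synthetic stand-ins, not our measurements.

```python
import numpy as np

def empirical_cdf(samples):
    """Return sorted latencies x and the empirical per-link CDF F evaluated at x."""
    x = np.sort(np.asarray(samples, dtype=float))
    f = np.arange(1, x.size + 1) / x.size
    return x, f

def fastest_of_n_cdf(f, n):
    """Order-statistics CDF of the fastest of n independent links:
    F_n(t) = 1 - (1 - F(t))**n."""
    return 1.0 - (1.0 - np.asarray(f, dtype=float)) ** n

def latency_at_confidence(x, f, level=0.9):
    """Smallest latency t with F(t) >= level (e.g. the 90% latency)."""
    return x[np.searchsorted(f, level)]
```

For example, drawing synthetic per-link latencies and comparing `latency_at_confidence(x, f)` against `latency_at_confidence(x, fastest_of_n_cdf(f, 8))` shows the 90% latency shrinking with the number of users, mirroring the multi-user improvement reported above.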
The results in Figure 8 relate to a 1 KB data packet size for both the TCP and UDP protocols in a 4G context. From these figures, it is evident that when considering a larger number of PEV users, the network exhibits reasonably consistent OWL results for EVGI applications, demonstrating the validity of the model in Equation (1) and the results presented in Figure 7. For instance, in Figure 8a, 90% of the packets for eight PEV users with TCP are received within 185 ms and 165 ms in the practical experiment and the statistical model, respectively, in a 4G environment. In total, 90% of the packets for the one-user scenario with 4G UDP are received within 650 ms. These OWL values can be substantially improved by increasing the number of users to eight, resulting in 90% latency values of 200 ms for both the practical experiment and the statistical model, as illustrated in Figure 8b. It is apparent that as the number of users increases, the results of the statistical model and the practical experiment converge significantly.
4. Discussion
As EVGI involves real-time communication among several entities to exchange data, communication latency and reliability are of paramount importance. The typical size of the exchanged data, such as the charging power level, charging schedule, and electricity prices, can be in the range of several kilobytes depending on the specific implementation and the communication protocol used; data sizes of up to 10 kB are considered in this study. In this regard, 4G TCP demonstrated superior latency and packet loss performance over 4G UDP. It was also observed that the data packet losses are not linear with packet size for either the TCP or UDP protocol. The most efficient data packet size among those considered was found to be around 1 kB. EVGI sessions would be affected if the cumulative data exchanged were of larger sizes, such as several tens of kilobytes, since higher latencies would be required to achieve a confidence level above 90%. In terms of latency, both 4G TCP and UDP achieved results within 100 ms for a single-PEV case, though this does not include the latency on the Wi-Fi link, which, based on our previous work, can be up to 50 ms. A 4G communication infrastructure can therefore be considered practical for EVGI implementations, even considering the fast-frequency-response-time regulations in practice, for example, 500 ms in the UK. However, it is worth noting that the latency depends strongly on the number of PEVs using the same base station and server. As expected, as the number of PEVs in EVGI sessions within the same base station increases, the latency reduces significantly. In this study, EVGI with eight PEVs was tested, and it was observed that 90% of the 1 kB data packets are transmitted within 400 ms.
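The 500 ms fast-frequency-response budget mentioned above can be checked with simple arithmetic. The following sketch combines the up-to-50 ms Wi-Fi hop with a measured last-mile 90% latency; the function and constant names are illustrative, while the figures are those reported in this section.

```python
# Back-of-the-envelope deadline check using figures from this section:
# an up-to-50 ms Wi-Fi hop (from our earlier work) and the UK's 500 ms
# fast-frequency-response requirement.
WIFI_MS = 50
DEADLINE_MS = 500

def within_deadline(last_mile_90pct_ms, wifi_ms=WIFI_MS, deadline_ms=DEADLINE_MS):
    """True if Wi-Fi hop + 4G/fiber last-mile 90% latency meets the deadline."""
    return wifi_ms + last_mile_90pct_ms <= deadline_ms
```

For example, the 185 ms 90% latency measured for eight TCP users leaves ample headroom, whereas the 650 ms single-user 4G UDP figure exceeds the budget.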
Considering the high penetration of PEVs in practice, latency arising from queuing delays in the network can affect efficient EVGI operation and limit its practicality, especially in scenarios where a fast response, such as a spinning reserve, is required. Overall, in this study, the UK’s 4G network showed the potential to be a viable communication infrastructure for EVGI operations, especially when the latency issues associated with higher PEV penetration are properly addressed.
Although we tested 4G technologies for IoT use cases after comparing various last-mile technologies, there are still some limitations that warrant further study and model enhancement. Factors such as the reporting rate, signal strength, bandwidth, congestion management, and the number of routers and switches are not considered in this study. In addition, potential security and privacy issues related to EVGI operations need to be evaluated.
5. Conclusions
Central to successful EVGI operations is seamless real-time communication between the various EVGI entities. This, therefore, requires the investigation of communication protocols and their performance. This study assessed the data-exchange capabilities, latencies, and the impact of the number of PEVs on these metrics in the context of the UK’s 4G network. It was demonstrated that while 4G TCP outperforms 4G UDP in certain respects, both can sometimes achieve sub-100 ms latencies for single-PEV scenarios, even when accounting for Wi-Fi link delays. However, at a 90% confidence level, the latency of TCP is generally significantly lower than that of UDP, at around 500 ms. A noticeable reduction in latency for communicating an emergency message with an increasing number of PEVs highlights the resilience of networked PEVs, especially for the rapid responses required for EVGI services.
In conclusion, while the UK’s 4G infrastructure exhibits significant potential as a foundation for EVGI operations, more practical scenarios need to be tested, particularly those with high PEV penetration, to ensure that the grid’s demand for real-time, reliable data communication aligns with the rapid evolution and adoption of PEVs. Future work will look into mitigating the identified latency challenges by using emerging communication technologies and strategies to optimize EVGI.