In this section, we discuss results from a smart grid experimental testbed. The testbed uses a laptop PC as the network controller and low-cost Raspberry Pi computers to emulate client-side devices that can implement advanced smart grid applications, such as demand response. The system makes use of the UK internet network to emulate a practical smart grid system.
Impact of Compression on Cellular Communications Latency
One of the most critical parameters in smart grid communications is latency. Besides coverage, the latency characteristics of NB-IoT are an essential factor in designing systems based on this new IoT technology. As NB-IoT has not yet been rolled out widely, we have tested the compression technique in practice on fourth-generation (4G) and third-generation (3G) cellular communication technologies. In this paper, we measure one-way latency experimentally, defined as the time required for a data packet to be communicated from the transmitter to the receiver, including data compression time where compression is used.
It is worth mentioning that NB-IoT is based on the Long-Term Evolution (LTE) technology used in 4G cellular networks. Therefore, experiments on 4G provide a useful measure for evaluating closely related technologies such as NB-IoT.
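To illustrate how one-way latency can be measured, the sketch below timestamps each packet at the sender so that the receiver can subtract the embedded send time from its own arrival time. This is a minimal illustration, not the testbed's actual code, and it assumes the client and server clocks are synchronised (e.g., via NTP); since our definition of latency includes compression time, the timestamp would be taken before compression begins.

```python
import struct
import time

def pack_packet(payload: bytes) -> bytes:
    """Prefix the payload with an 8-byte send timestamp (big-endian double)."""
    return struct.pack("!d", time.time()) + payload

def one_way_latency(packet: bytes, t_receive: float) -> float:
    """One-way latency in seconds: receive time minus the embedded send time.

    Only meaningful if sender and receiver clocks are synchronised (e.g., NTP).
    """
    (t_send,) = struct.unpack("!d", packet[:8])
    return t_receive - t_send
```

In the testbed, such a timestamped packet would be sent over a TCP socket from the Raspberry Pi client to the server, which records its arrival time and computes the difference.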
Data transmission in an IoT network has been emulated by sending short data packets, ranging from 50 bytes to 10 KBytes, from the client platform (Raspberry Pi 3B) to the server platform (laptop PC) over 3G and 4G communication systems. Data sources in smart grid applications vary widely, but for the purpose of demonstration the data used here were taken from the MIT Reference Energy Disaggregation Dataset (REDD) [22]. This dataset comprises a set of power consumption measurements from six houses, which were converted into energy consumption values recorded every 10 min; more details can be found in [17].
The impact of compression techniques on latency has been studied using two lossless compression algorithms, Huffman coding and Lempel–Ziv–Welch (LZW). The data reduction performance of the two algorithms has been compared by calculating the space-saving ratio for each compression technique, as shown in Equation (8) and Table 5.
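For reference, the space-saving ratio of Equation (8) can be computed as below; this sketch assumes the common definition, i.e., one minus the ratio of compressed size to original size:

```python
def space_saving(original_size: int, compressed_size: int) -> float:
    """Space saving = 1 - compressed/original, expressed as a fraction.

    E.g., compressing 1000 bytes down to 300 bytes gives a saving of 0.7 (70%).
    """
    return 1.0 - compressed_size / original_size
```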
It is essential to keep in mind that while applying a compression algorithm reduces the data size, it also increases the processing time, both for compressing and for decompressing the data packet, as depicted in Equation (9).
Table 6 shows the compression and decompression processing times on a client platform (Raspberry Pi 3B+) for the selected lossless compression techniques. This type of processor is representative of what may be used in an advanced client device implementing sophisticated smart grid functions such as demand response [23]. Simpler devices such as smart meters more commonly use lower-power microcontrollers, which would require longer processing times; nonetheless, the relative comparison of the two methods would still hold. The processing times of the LZW and Huffman coding algorithms differ markedly on a hardware platform such as the RPi client: Table 6 shows that LZW compression takes much longer than Huffman coding compression, whereas LZW decompression takes much less time than Huffman decoding.
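To make the comparison concrete, a minimal LZW implementation is sketched below together with `perf_counter`-based timing of the kind used to populate a table such as Table 6. This is an illustrative textbook version, not the implementation benchmarked in this work, and the sample record format is hypothetical.

```python
import time

def lzw_compress(data: bytes) -> list:
    """Textbook LZW: emit dictionary indices, growing the dictionary as we go."""
    dictionary = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wb = w + bytes([byte])
        if wb in dictionary:
            w = wb
        else:
            out.append(dictionary[w])
            dictionary[wb] = len(dictionary)   # register the new phrase
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes: list) -> bytes:
    """Inverse of lzw_compress, rebuilding the dictionary on the fly."""
    dictionary = {i: bytes([i]) for i in range(256)}
    w = dictionary[codes[0]]
    out = [w]
    for k in codes[1:]:
        entry = dictionary[k] if k in dictionary else w + w[:1]
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[:1]
        w = entry
    return b"".join(out)

# Timing in the style of Table 6 (wall-clock time on the client platform)
sample = b"date,time,circuit,power\n" * 100   # hypothetical record layout
t0 = time.perf_counter()
codes = lzw_compress(sample)
compress_time = time.perf_counter() - t0
```

Running the same stopwatch pattern around `lzw_decompress` (and around a Huffman encoder/decoder) on the Raspberry Pi yields the per-packet-size timings compared in the table.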
We need to keep in mind that the performance of a compression algorithm changes with the type of data, as discussed in [24]. As seen in [24], the Huffman algorithm can achieve a high compression ratio regardless of the data type considered, such as temperature data, humidity data, ECG data, and text files. In contrast, LZW performs poorly on numerical data types such as temperature, humidity and ECG data, while it performs better when compressing text files. The dataset used in our work, from [22], is an alphanumeric data type including date, time, circuit number and power consumption. On a server platform using a powerful PC, the difference in compression and decompression times between the two techniques is small. From Table 6, it can be predicted that using the Huffman algorithm on client platforms with constrained hardware can be much more efficient than LZW. Based on the evaluation results described above, a 60–80% reduction in data packet size can be achieved with the Huffman coding algorithm, which requires less than 20 ms of processing time for data packet sizes up to 2 KBytes.
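As a sanity check on reductions of this order, the sketch below builds a Huffman code table for a byte string and reports the encoded length in bits. It is a minimal illustration (frequencies taken from the data itself, code-table overhead not counted), not the implementation evaluated here, and the sample record is hypothetical; repetitive alphanumeric records compress well because a few byte values dominate.

```python
import heapq
from collections import Counter

def huffman_table(data: bytes) -> dict:
    """Map each byte value to its Huffman code (as a bit string)."""
    freq = Counter(data)
    if len(freq) == 1:                        # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Heap entries: [weight, tiebreak, [symbol, code], [symbol, code], ...]
    heap = [[w, i, [sym, ""]] for i, (sym, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]           # left branch
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]           # right branch
        heapq.heappush(heap, [lo[0] + hi[0], tiebreak] + lo[2:] + hi[2:])
        tiebreak += 1
    return {sym: code for sym, code in heap[0][2:]}

def huffman_bits(data: bytes) -> int:
    """Total encoded length in bits (excluding the code table itself)."""
    table = huffman_table(data)
    return sum(len(table[b]) for b in data)
```

For example, `huffman_bits(record) / (8 * len(record))` gives the compressed fraction for a given record string, which can then be converted to a space-saving percentage.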
In this research work, we have compared wireless last-mile communication technologies, as shown in Table 7, based on estimated latency and data rate values from the literature and previous research work [23,25]. According to [23,26,27], the maximum coupling loss (MCL, a measure of signal strength) can significantly impact latency. The latency of two standard protocols, the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP), has been simulated for a smart grid IoT network in [28]. Our experimental results are for 3G and 4G links using a standard TCP implementation with Nagle's algorithm enabled. Results with and without compression for different data packet sizes are illustrated in Figure 8, Figure 9 and Figure 10. Our prediction for NB-IoT is based on our experiments on the 3G and 4G technologies.
This prediction has been validated by a practical experiment applying the two compression algorithms to different data packet sizes, as shown in Figure 8. The figure shows the median latency and compares latency measurements for different data packet sizes using the TCP protocol, with and without compression. Figure 8a,b show that Huffman coding, especially for data packet sizes below 4 KBytes, is much more efficient than LZW on the client side.
Figure 9 and Figure 10 show the cumulative distribution function (CDF) of the latency measurements collected from the testbed for both 3G and 4G cellular networks using the LZW and Huffman coding algorithms. The red line in each figure marks the latency value below which 90% of the measurements fall.
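The 90% line on the CDF plots corresponds to an empirical 90th percentile of the latency samples, which can be computed, for example, with the nearest-rank method (our assumption about how the threshold is drawn):

```python
import math

def latency_percentile(samples, q: float = 0.9) -> float:
    """Value below which a fraction q of the samples fall (nearest-rank method)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(q * len(ordered)))   # 1-based nearest rank
    return ordered[rank - 1]
```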
The 4G TCP results in Figure 9 and Figure 10, for both Huffman coding and LZW, show more predictable behaviour than the 3G results. It can be seen that Huffman coding generally provides 10–20% lower latency than both the LZW method and the uncoded case. Increasing the data packet size increases the latency. The very high latencies for the 3G wireless technology in Figure 9 and Figure 10 are mainly due to higher data packet loss, which has been presented in detail in [23] for data transmitted without compression.